FARVEL BIG TECH

The power of ChatGPT

76 posts · 65 posters · 2 views

This topic has been deleted. Only users with topic management privileges can see it.
  • benjamineskola@hachyderm.io (#64), replying to loucyx@mastodon.social:

    > @jafo @GossiTheDog even then, a reliable source of information should be consistent, meaning both Kevin and you should have gotten the same result, but we all know LLMs aren't consistent (even when the same user asks the same question), so if anything, you added more evidence proving we should avoid LLMs 🤷🏻‍♀️

    @loucyx @jafo @GossiTheDog It's also not even correct, so what you've managed to get there is a different wrong answer.

    If you think "confidently incorrect" is an improvement over "obvious gibberish", then yeah, I suppose this is preferable, but it doesn't get you any closer to the truth.

    (Personally, I think "obviously wrong" is preferable, because then at least you know to ignore it.)
  • technicaladept@techhub.social (#65), replying to tempusfelix@wehavecookies.social:

    > @alice @GossiTheDog
    >
    > The image appears to be a screenshot of an AI answer that is wrong in every sense: asked who was the first openly gay radio presenter on a specific national radio station, it gives answers that are incorrect in multiple dimensions.
    >
    > The answer to the question was Kenny Everett, but it doesn't seem to know that.

    @tempusfelix @alice @GossiTheDog Though Kenny was openly gay by the late Eighties, and was certainly one of the first Radio 1 presenters in the Sixties, I don't think he was openly gay at the same time that he was presenting on Radio 1.
  • idren@mstdn.ca (#66), replying to gossithedog@cyberplace.social's image post "The power of ChatGPT":

    @GossiTheDog no homo gay is still gay in the digital cat fart world lol
  • jafo@inuh.net (#67), replying to benjamineskola@hachyderm.io (#64):

    @benjamineskola @loucyx @GossiTheDog What do you consider a correct answer? According to the respective Wikipedia entries for them, the answer I got seems to be correct. The answer ChatGPT gave me linked to citations which also seemed to back up the answer:
    https://www.theguardian.com/tv-and-radio/2022/aug/25/farewell-scott-mills-bbc-radio-1?utm_source=chatgpt.com
    https://en.wikipedia.org/wiki/Kevin_Greening?utm_source=chatgpt.com
    https://en.wikipedia.org/wiki/Scott_Mills?utm_source=chatgpt.com
  • benjamineskola@hachyderm.io (#68), replying to jafo@inuh.net (#67):

    @jafo @loucyx @GossiTheDog Elsewhere in this thread, Kenny Everett was claimed to be the first — but the timeline might be wrong for that, depending when he actually came out.
  • jafo@inuh.net (#69), replying to loucyx@mastodon.social:

    @loucyx @GossiTheDog I don't know about you, but I learned long ago not to blindly trust the tools I use, on the Internet and elsewhere. I use tools understanding their limitations, and I check the work. In this case, outside sources seemed to corroborate the assertions ChatGPT made. I can't speak to Kevin's answer, because there's no information on WHAT ChatGPT was given; as I said, I used "Thinking-Standard" to get my answer, so YMMV if you use other models.
  • benjamineskola@hachyderm.io (#70), replying to jafo@inuh.net (#69):

    @jafo @loucyx @GossiTheDog But your mileage should not vary. That's the point.

    Getting a different answer each time is what makes these tools unfit for purpose. If they return the right answer some of the time, but you never know which times, what's the point of them?
  • loucyx@mastodon.social (#71), replying to benjamineskola@hachyderm.io (#70):

    @benjamineskola @jafo @GossiTheDog 100% this! If they were always right or always wrong it would be one thing, but the only constant is that they are always confident about their answer (whether it's right or wrong), which is what makes them dangerously unreliable.

    And this isn't even getting into the whole detrimental effect they have on cognitive analysis and reasoning in LLM consumers.
  • musiciankate@hcommons.social (#72), replying to loucyx@mastodon.social (#71):

    @benjamineskola @GossiTheDog @loucyx @jafo It's the difference between lying and bullshitting. Lying at least has a regard for what is true. Bullshitting doesn't care if it is correct or not.
  • lexr@chaos.social (#73), replying to tempusfelix@wehavecookies.social (see #65):

    @tempusfelix @alice @GossiTheDog
    FWIW, describing that image as "ChatGPT being wrong" is like describing this one as "a funny tweet": technically correct, but not actually very helpful to a visually impaired person trying to understand the joke.
  • jafo@inuh.net (#74), following up on their own earlier post:

    > @GossiTheDog Cannot confirm. ChatGPT Thinking-Standard.

    @GossiTheDog If we assume this is the answer that LLMs are able to give, does this change the argument we are having? As far as I can tell from other sources I've reviewed, this answer seems reasonable.
  • heybenji@social.coop (#75), replying to gossithedog@cyberplace.social's image post "The power of ChatGPT":

    @GossiTheDog this is hilarious and if it had alt text I'd boost it.
  • wanderingindigitalworlds@sunny.garden (#76), replying to gossithedog@cyberplace.social's image post "The power of ChatGPT":

    @GossiTheDog This is what techbros are uplifting as that which will replace human workers... They are a fucking joke.

    They will never get humans out of the workplace with a bullshitting bundle of stolen data and weighted tokens.

  • pelle@veganism.social shared this topic
                            Powered by NodeBB Contributors
                            Graciously hosted by data.coop