FARVEL BIG TECH
  1. Home
  2. Uncategorised
  3. The power of ChatGPT

The power of ChatGPT

76 Posts 65 Posters 2 Views
This thread has been deleted. Only users with topic-management privileges can see it.
  • farshidhakimy@chaos.social

    @GossiTheDog "we should resist the legacy definition of the word 'solved' in this exciting era of AI"
    https://youtu.be/tnYaExb5JvM?t=2m9s

    vikingchieftain@mstdn.social wrote, last edited
    #52

    @farshidhakimy 🤣🤣🤣

    Thanks Farshid! This clip was very funny. It was like an interview from the dotcom bubble era, but with an AI CEO. Wild times ahead.

    @GossiTheDog

    • lbcp@social.linux.pizza

      @GossiTheDog Turns out, some of them gays have just been some radio guys.

      vikingchieftain@mstdn.social wrote, last edited
      #53

      @lbcp @GossiTheDog

      Gays... Guys... the words look the same, so they must be the same.

      • simonzerafa@infosec.exchange

        @GossiTheDog

        I would love a calculator that is correct 80% of the time and requires me to check every calculation for accuracy 😕🤦‍♂️

        vikingchieftain@mstdn.social wrote, last edited
        #54

        @simonzerafa @GossiTheDog

        Nah, just live with the 20% chance that the next moon landing will bury the astronauts 20 metres underground. Trump will approve of this because it will distract people from the Epstein files for a period of time.

        • gossithedog@cyberplace.social

          pizzademon@mastodon.online wrote, last edited
          #55

          @GossiTheDog

          • wotsac@mastodon.social

            @alice @GossiTheDog

            The answer is too good not to share:

            Q: Who were the first openly gay radio hosts on Radio 1?

            A: The first openly gay presenters on BBC Radio 1 were Simon Mayo and Phill Jupitus, but with an important nuance.

              • Simon Mayo is not gay, so he doesn't count in that sense.
              • Phill Jupitus is also not gay.

            The commonly accepted answer is actually:
            Chris Evans (not gay) and others hosted the station early on, but they weren't openly gay.

            bzdev@fosstodon.org wrote, last edited
            #56

            @wotsac @alice @GossiTheDog

            ChatGPT must have been trained on the 1954 version of the film Sabrina, where a French chef in a cooking school tells his class that a soufflé must be "gay" (a word with a different connotation in 1954).

            https://www.youtube.com/watch?v=FR7KlvISE2w

            • gossithedog@cyberplace.social

              patrick@hatoya.cafe wrote, last edited
              #57

              @GossiTheDog@cyberplace.social

              • gossithedog@cyberplace.social

                maddiefuzz@masto.hackers.town wrote, last edited
                #58

                @GossiTheDog You can refer to me as Maddie (not gay)

                • gossithedog@cyberplace.social

                  cinebox@masto.hackers.town wrote, last edited
                  #59

                  @GossiTheDog reminds me of https://www.youtube.com/watch?v=OMdPj3HXMgQ

                  • jafo@inuh.net

                    @GossiTheDog Cannot confirm. ChatGPT Thinking-Standard.

                    loucyx@mastodon.social wrote, last edited
                    #60

                    @jafo @GossiTheDog even then, a reliable source of information should be consistent, meaning both Kevin and you should have gotten the same result, but we all know LLMs aren't consistent (even when the same user asks the same question) so if anything, you added more evidence proving we should avoid LLMs 🤷🏻‍♀️

                    • gossithedog@cyberplace.social

                      foobarry@mastodon.social wrote, last edited
                      #61

                      @GossiTheDog the right answer is probably Kenny Everett

                      • gossithedog@cyberplace.social

                        kofzmann@toots.nu wrote, last edited
                        #62

                        @GossiTheDog imagine how much fossil fuel was used to generate that sophisticated answer. Any such energy calculation must include all the resources needed to build the data sets the system requires to perform the operation.

                        • gossithedog@cyberplace.social

                          criticalbackend@tech.lgbt wrote, last edited
                          #63

                          @GossiTheDog I decided to experiment with AI by asking which philosopher was run over by a milk float, and it did a similar thing.

                          If you know who it actually was, that would be much appreciated!

                          • loucyx@mastodon.social

                            benjamineskola@hachyderm.io wrote, last edited
                            #64

                            @loucyx @jafo @GossiTheDog it's also not even correct, so what you've managed to get there is a different wrong answer.

                            If you think 'confidently incorrect' is an improvement over 'obvious gibberish', then yeah, I suppose this is preferable, but it doesn't get you any closer to the truth.

                            (personally I think 'obviously wrong' is preferable, because then at least you know to ignore it.)

                            • tempusfelix@wehavecookies.social

                              @alice @GossiTheDog

                              The image appears to be a screenshot of an AI answer that is wrong in every sense: asked who the first openly gay radio presenter on a specific national radio station was, it provides answers that are incorrect in multiple dimensions.

                              The answer to the question was Kenny Everett, but it doesn’t seem to know that.

                              technicaladept@techhub.social wrote, last edited
                              #65

                              @tempusfelix @alice @GossiTheDog Though Kenny was openly gay by the late Eighties and was certainly one of the first Radio 1 presenters in the Sixties, I don't think he was openly gay at the same time that he was presenting at Radio 1.

                              • gossithedog@cyberplace.social

                                idren@mstdn.ca wrote, last edited
                                #66

                                @GossiTheDog no homo gay is still gay in the digital cat fart world lol

                                • benjamineskola@hachyderm.io

                                  jafo@inuh.net wrote, last edited
                                  #67

                                  @benjamineskola @loucyx @GossiTheDog What do you consider a correct answer? According to the respective Wikipedia entries for them, the answer I got seems to be correct. The answer ChatGPT gave me linked to citations which also seemed to back up the answer.
                                  https://www.theguardian.com/tv-and-radio/2022/aug/25/farewell-scott-mills-bbc-radio-1?utm_source=chatgpt.com
                                  https://en.wikipedia.org/wiki/Kevin_Greening?utm_source=chatgpt.com
                                  https://en.wikipedia.org/wiki/Scott_Mills?utm_source=chatgpt.com

                                  • jafo@inuh.net

                                    benjamineskola@hachyderm.io wrote, last edited
                                    #68

                                    @jafo @loucyx @GossiTheDog Elsewhere in this thread, Kenny Everett was claimed to be the first — but the timeline might be wrong for that, depending when he actually came out.

                                    • loucyx@mastodon.social

                                      jafo@inuh.net wrote, last edited
                                      #69

                                      @loucyx @GossiTheDog I don't know about you, but I learned long ago not to blindly trust the tools I use, on the Internet and elsewhere. I use tools understanding their limitations, and check the work. In this case, outside sources seemed to corroborate the assertions ChatGPT made. I can't speak to Kevin's answer, because there is no information on WHAT ChatGPT was given; as I said, I used "Thinking-Standard" to get my answer, and YMMV if you use other models.

                                      • jafo@inuh.net

                                        benjamineskola@hachyderm.io wrote, last edited
                                        #70

                                        @jafo @loucyx @GossiTheDog but your mileage should not vary. that's the point.

                                        getting a different answer each time is what makes these tools not fit for purpose. if they return the right answer some of the time but you never know which times, what's the point in them?

                                        • benjamineskola@hachyderm.io

                                          loucyx@mastodon.social wrote, last edited
                                          #71

                                          @benjamineskola @jafo @GossiTheDog 100% this! If they were always right or always wrong it would be one thing, but the only constant is that they are always confident about their answer (whether it's right or wrong), which is what makes them dangerously unreliable.

                                          And this isn’t even getting into the whole detrimental effect they have on cognitive analysis and reasoning for LLM consumers.

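The run-to-run variation debated in the last few posts comes largely from how chat models sample tokens at decoding time. Here is a minimal, self-contained sketch of temperature sampling; the logits, temperature values, and token ids are made up for illustration, and real deployments add further sources of nondeterminism such as batching and hardware effects:

```python
import math
import random

def sample_next_token(logits, temperature, rng):
    """Pick one token index from raw logits via temperature sampling.

    Higher temperature flattens the softmax distribution, so repeated
    runs are more likely to choose different tokens; as temperature
    approaches zero the choice collapses to the single top logit.
    """
    scaled = [score / temperature for score in logits]
    top = max(scaled)
    weights = [math.exp(s - top) for s in scaled]  # subtract max for numerical stability
    total = sum(weights)
    threshold = rng.random() * total
    cumulative = 0.0
    for index, weight in enumerate(weights):
        cumulative += weight
        if threshold < cumulative:
            return index
    return len(weights) - 1  # guard against floating-point round-off

# Hypothetical scores for three candidate next tokens.
logits = [2.0, 1.8, 0.5]

# Fifty "runs" of the same prompt: identical logits, different random draws.
warm = {sample_next_token(logits, temperature=1.0, rng=random.Random(seed))
        for seed in range(50)}
cold = {sample_next_token(logits, temperature=0.01, rng=random.Random(seed))
        for seed in range(50)}

print("temperature 1.0 picked token ids:", sorted(warm))   # usually several distinct ids
print("temperature 0.01 picked token ids:", sorted(cold))  # collapses to the top logit
```

The sketch only models the sampling step, but it shows why two people asking the identical question can get different answers from an unchanged model, which is the behaviour jafo and benjamineskola are debating above.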
                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop