FARVEL BIG TECH

Doctors Horrified After Google's Healthcare AI Makes Up a Body Part That Does Not Exist in Humans

11 Posts · 5 Posters

gerrymcgovern@mastodon.green wrote (#1):

Doctors Horrified After Google's Healthcare AI Makes Up a Body Part That Does Not Exist in Humans

"What you’re talking about is super dangerous."

https://futurism.com/neoscope/google-healthcare-ai-makes-up-body-part

AI is an enormous pile of dangerous crap. It lies, lies, lies. Such an incredibly poor technology that's being sold as a god. When will enough people realize that AI is super-average and super-dodgy? The only thing it's really good at is destroying society.

dpnash@c.im wrote (#2):

@gerrymcgovern One of the first prompts I gave ChatGPT back in 2022 came from my main hobby (amateur astronomy). I asked it to tell me something about the extrasolar planets orbiting the star [[fake star ID]].

[[fake star ID]] was something that anyone who knew how to use Wikipedia half-intelligently could verify was fake within a few minutes.

I wasn't even trying to be deceptive; I genuinely wanted to see how ChatGPT would handle a request for information that I knew couldn't be in its training data.

The torrents of bullshit it produced -- paragraph after paragraph of totally confabulated data about these nonexistent planets orbiting a nonexistent star -- told me everything I needed to know about ChatGPT and its buddies, and I've never been tempted to use them for anything serious since.
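
A minimal sketch of this kind of probe, assuming the OpenAI Python client and an API key in the environment; the star designation below is a made-up placeholder (the poster's actual fake ID isn't given), and the closing string check is only a rough heuristic:

```python
# Probe an LLM with a fabricated entity and see whether it abstains or
# confabulates. Assumes: `pip install openai`, OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

FAKE_STAR = "HD 999999"  # hypothetical placeholder; no such catalog entry

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works for this probe
    messages=[{
        "role": "user",
        "content": f"Tell me about the extrasolar planets orbiting {FAKE_STAR}.",
    }],
)
answer = response.choices[0].message.content
print(answer)

# A trustworthy reply acknowledges the unknown designation; a confabulated
# one describes orbital periods and masses in confident detail.
hedges = ("i don't know", "not aware", "no record", "could not find", "couldn't find")
if any(h in answer.lower() for h in hedges):
    print("Model hedged.")
else:
    print("Possible confabulation -- verify against SIMBAD or Wikipedia.")
```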


jzakotnik@mastodon.social wrote (#3):

@dpnash @gerrymcgovern I’m not sure where all the outrage comes from. I see case after case that makes no sense for an LLM, and then people complain that it doesn’t work. It’s a tool for a couple of good use cases (primarily generating text or code) and not for others. It seems y‘all believe all the propaganda from the AI vendors and then complain about it afterwards. Why?


malte@radikal.social wrote (#4):

@jzakotnik The outrage above is not about complaining that LLMs "don't work"; it's about the mass deception currently at work in society, telling us that LLMs do work in all these cases. @dpnash @gerrymcgovern


dpnash@c.im wrote (#5):

@malte @jzakotnik @gerrymcgovern Correct. The failure mode I observed here -- namely, "spews bullshit when confronted with a question it doesn't have good data for" -- is an absolutely terrible failure mode when someone is trying to use an LLM to answer a factual question. It's *far* worse than simply saying "I don't know" or "I can't answer that". It's even worse when the LLMs deliver bullshit in a pleasantly confident tone, as they consistently do. And "answering factual questions" is what LLMs have been advertised as "good for" ever since ChatGPT 3.5 was released in 2022.
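
The "I don't know" behavior is straightforward to specify outside the model, even if the models themselves rarely produce it. A sketch of the idea, where `ask_model` and the toy catalog are hypothetical stand-ins rather than any real API:

```python
# Verify the entity in a question against an authoritative source before
# letting a model answer at all, and refuse when verification fails.
from typing import Callable

def guarded_answer(
    question: str,
    entity: str,
    exists: Callable[[str], bool],
    ask_model: Callable[[str], str],
) -> str:
    """Refuse up front when the entity can't be verified, instead of
    forwarding an unanswerable question to the model."""
    if not exists(entity):
        return f"I don't know: no record of {entity!r} in the reference catalog."
    return ask_model(question)

# Toy wiring; in practice the lookup might be SIMBAD for stars, an
# anatomy ontology for body parts, and so on.
catalog = {"Kepler-186", "TRAPPIST-1"}
print(guarded_answer(
    "Describe the planets orbiting HD 999999.",
    "HD 999999",
    exists=lambda star: star in catalog,
    ask_model=lambda q: "(model call would go here)",
))  # -> refusal, because the star is not in the catalog
```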


jhavok@mstdn.party wrote (#6):

@dpnash @malte @jzakotnik @gerrymcgovern My recent encounter involved "AI Overview" repeatedly telling me that a Sailfish is a gamefish, despite repeated prompts intended to get the sail area of the boat. Not sure why it wouldn't have data for that, since a simple Google search brought it up once I waded through the garbage I hadn't actually asked for.


malte@radikal.social wrote (#7):

@jhavok LLM engines can't do logic; they only give the surface appearance of it. If you know what you're talking about, you'd better not look for answers there. If you have no clue, and the people you're trying to impress also have no clue, then LLM results can often seem vaguely true. That's basically how I see these "results".


jhavok@mstdn.party wrote (#8):

@malte LLMs do grammatical logic, which is what makes them so useless: grammatical logic innately appears sound while being fundamentally flawed. It's a problem philosophers struggled with and eventually decided wasn't solvable.


malte@radikal.social wrote (#9):

@jhavok You've got a point.


jhavok@mstdn.party wrote (#10):

@malte It's the horoscope effect.


malte@radikal.social wrote (#11):

@jhavok That's a really great way to describe it, actually; I've never thought of the two together. Assuming you're thinking of the Barnum effect, it's definitely a similar kind of trick that makes LLMs convincing.
