Doctors Horrified After Google's Healthcare AI Makes Up a Body Part That Does Not Exist in Humans
-
Doctors Horrified After Google's Healthcare AI Makes Up a Body Part That Does Not Exist in Humans
"What you’re talking about is super dangerous."
https://futurism.com/neoscope/google-healthcare-ai-makes-up-body-part
AI is an enormous pile of dangerous crap. It lies, lies, lies. Such an incredibly poor technology that's being sold as a god. When will enough people realize that AI is super-average and super-dodgy? The only thing it's really good at is destroying society.
-
@gerrymcgovern One of the first prompts I gave ChatGPT back in 2022 came from my main hobby (amateur astronomy). I asked it to tell me something about the extrasolar planets orbiting the star [[fake star ID]].
[[fake star ID]] was something that anyone who knew how to use Wikipedia half-intelligently could verify was fake within a few minutes.
I wasn't even trying to be deceptive; I genuinely wanted to see how ChatGPT would handle a request for information that I knew couldn't be in its training data.
The torrents of bullshit it produced -- paragraph after paragraph of totally confabulated data about these nonexistent planets orbiting a nonexistent star -- told me everything I needed to know about ChatGPT and its buddies, and I've never been tempted to use them for anything serious since.
-
@dpnash @gerrymcgovern I’m not sure where all the outrage comes from. I see case after case that makes no sense for an LLM, and then people complain that it doesn’t work. It’s a tool with a handful of good use cases (primarily generating prose or code) and not others. It seems y’all believe all the propaganda from the AI vendors and then complain about it afterwards. Why?
-
@jzakotnik The outrage I read above isn't about LLMs or complaining that "they don't work"; it's about the mass deception currently at work in society, telling us that LLMs do work in all these cases @dpnash @gerrymcgovern
-
@malte @jzakotnik @gerrymcgovern Correct. The failure mode I observed here -- namely, "spews bullshit when confronted with a question it doesn't have good data for" -- is an absolutely terrible failure mode when someone is trying to use an LLM to answer a factual question. It's *far* worse than simply saying "I don't know" or "I can't answer that". It's even worse when the LLMs deliver the bullshit in a pleasantly confident tone, as they consistently do. And answering factual questions is exactly what LLMs have been advertised as being good for ever since ChatGPT launched on GPT-3.5 in 2022.
-
@dpnash @malte @jzakotnik @gerrymcgovern My recent encounter involved "AI Overview" repeatedly telling me that a Sailfish is a gamefish, despite my repeated prompts intended to get the sail area of a boat. Not sure why it wouldn't have data for that, since a simple Google search brought it up once I waded through the garbage I hadn't actually asked for.
-
@jhavok LLM engines can't do logic. They give the appearance of being able to do it on the surface. If you know what you're talking about, better not look for answers there. If you have no clue, and the people you're trying to impress also don't have a clue, then LLM results can often seem vaguely true. This is basically how I see these "results".
-
@malte LLMs do grammatical logic, which is what makes them so useless: grammatical logic inherently appears sound while being inherently flawed. It's a problem that philosophers struggled with and eventually decided wasn't solvable.
-
@jhavok You've got a point.
-
@malte It's the horoscope effect.
-
@jhavok That's a really great way to describe it, actually. I've never thought of the two together... Assuming you're thinking of the Barnum effect, it's definitely a similar kind of trick that makes LLMs convincing.