"As they try to tackle a problem step by step, they run the risk of hallucinating at each step.
-
"As they try to tackle a problem step by step, they run the risk of hallucinating at each step. The errors can compound as they spend more time thinking.
The latest bots reveal each step to users, which means the users may see each error, too. Researchers have also found that in many cases, the steps displayed by a bot are unrelated to the answer it eventually delivers.
'What the system says it is thinking is not necessarily what it is thinking,' said Aryo Pradipta Gema, an A.I. researcher at the University of Edinburgh and a fellow at Anthropic."
No, you idiot, the AI is not 'thinking' at all. It was never thinking.
Of course the displayed steps have no causal relationship to the answer it gives, because the "reasoning" is just a statistically plausible paragraph.
Tech journalism really needs to up its game.