We have to keep reminding people of this:
"Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they’re “right” it’s because correct things are often written down, so those patterns are frequent. That’s all.
When a chatbot gets something wrong, it’s not because it made an error. It’s because on that roll of the dice, it happened to string together a group of words that, when read by a human, represents something false. But it was working entirely as designed. It was supposed to make a sentence & it did." #chatGPT #AI
from Katie Mack on BSky
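The mechanism the quote describes can be made concrete with a toy sketch: a model that only counts which word tends to follow which, then samples the next word in proportion to frequency. The corpus and words below are invented for illustration; real LLMs work over tokens with learned neural weights, but the probabilistic "roll of the dice" is the same idea.

```python
import random
from collections import defaultdict

# Made-up toy corpus for the example (not real training data).
corpus = "the sky is blue . the sky is blue . the sky is green .".split()

# Count bigram frequencies: how often each word follows each other word.
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word, rng=random):
    """Sample the next word in proportion to how often it was seen."""
    candidates = follows[word]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights)[0]

# "is" is usually followed by "blue" (2 of 3 times) but sometimes by
# "green" -- neither outcome is an "error"; both are the sampler
# working exactly as designed, which is the quote's point.
print(next_word("is"))
```

When the sample happens to be "blue", a reader calls it "right"; when it is "green", a reader calls it "wrong". The model did the same thing both times.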
tanyakaroli@expressional.social shared this topic