@quinn I'm not sure I entirely agree with this sentence:
"Not that they are slow and dumb — they are doing something wholly different than I am when I write this, or you as you read it."
https://www.emptywheel.net/2025/02/14/what-we-talk-about-when-we-talk-about-ai-part-one/
-
@quinn I don't think we know enough about how humans construct language to say that definitively. We know that copying and using patterns is a big way we learn language; I think that at least in part we might construct sentences in a similar way to how LLMs do it.
-
@quinn fair point that we use a much smaller "training set" than LLMs, though.
-
@evan @quinn I'm pretty sure the subconscious part of the brain is a lot closer to how these artificial neural networks work than all the detractors think: the part that runs learned patterns (walking, for example), reacts automatically to inputs (you hear or see 2+2 and I know what has popped into your head; when someone tosses you a ball unexpectedly, try not to let your arm move), and finds patterns in what it perceives.
-
@evan we do know quite a bit about how humans learn language, and under what circumstances they don't. there's a lot we know about the construction of language; linguistics is not a new field. we definitely know enough to know it's not the AI attention mechanism or the ingestion of vast datasets. we're born wanting to say ba ba ba, but language and meaning are deeply embodied for the first few years. and really all of them, but that's too far for most people to go.
-
@quinn OK. I agree that hearing the word "ball" when Mama rolls you the ball gives a human a huge advantage in understanding the semantics of the word.
-
@quinn Overall, I think it's a bad idea to discount and devalue LLMs just because we understand their basic principles and can contrast them with the mechanics of human thought. "They don't work like us" is going to be true no matter what other intelligences we encounter, human-made or natural.
The emergence of complex phenomena from simple events at a massive scale is how we get human intelligence, and it's possible that's how we'll get machine intelligence, too.
-
@quinn The current generation of LLMs makes a great companion for a lot of tasks: research, coding, analysing. They are helpful but fallible, just like humans. Taking their output as a first step, then testing and verifying it, can lead to some pretty good results.
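That last point, treating the model's output as a first draft and testing it before trusting it, is easy to sketch in code. Below is a minimal, hypothetical illustration in Python; `ask_llm` is a stand-in for whatever model API you actually call, stubbed with a canned reply here so the example runs on its own.

```python
# Minimal draft-then-verify loop: take the model's output as a first step,
# then test it against known cases before accepting it.

def ask_llm(prompt: str) -> str:
    # Hypothetical stub: a real implementation would call a model API here.
    return "def slugify(s):\n    return '-'.join(s.lower().split())"

def verify(candidate_src: str) -> bool:
    """Run the model-drafted function against known cases before trusting it."""
    namespace: dict = {}
    exec(candidate_src, namespace)  # load the drafted function
    slugify = namespace["slugify"]
    cases = {"Hello World": "hello-world", "  a  b ": "a-b"}
    return all(slugify(inp) == out for inp, out in cases.items())

draft = ask_llm("Write a slugify(s) function in Python.")
print("accepted" if verify(draft) else "rejected: needs another pass")
```

The shape of the loop is the point, not the stub: the model proposes, and a cheap mechanical check (tests, a linter, cross-checking a source) decides whether the proposal gets used or sent back for another pass.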