People get mad when you call LLMs "spicy autocomplete," but my investigations into recreating and implementing small versions of this tech make me think that nickname is very accurate.
-
@futurebird Someone recently used the term "Augmenting Intelligence," and I thought it described this much better.
It kind of implies something intelligent rather than probabilistic is going on though.
If I have a hat filled with quotations of wisdom and I pull one out and read it now and then, some of the time it will align with what is going on and seem very perceptive.
If I have three hats with such quotes, labeled "good," "bad," and "cryptic," and I pick one based on the mood, people might think I'm a genius.
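The three-hats analogy can be sketched as a few lines of code. This is a toy illustration only; the hat names come from the post above, and the quotes are made-up placeholders:

```python
import random

# Toy version of the "three hats" analogy: canned quotes, picked at
# random from a hat chosen by a crude mood signal. No understanding
# is involved, yet the output can look perceptive when it happens to fit.
HATS = {
    "good": ["Fortune favors the bold.", "Every cloud has a silver lining."],
    "bad": ["Pride goes before a fall.", "All that glitters is not gold."],
    "cryptic": ["The owl flies at dusk.", "Still waters run deep."],
}

def pick_quote(mood: str) -> str:
    """Choose a hat by mood, then draw a random quote from it."""
    hat = HATS.get(mood, HATS["cryptic"])  # unknown moods fall back to cryptic
    return random.choice(hat)

print(pick_quote("good"))
```

The point of the analogy survives in code form: the "genius" effect comes entirely from the reader supplying the connection between the random draw and the situation.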
-
@futurebird Very, very similar to magicians and unethical grifters performing cold reads.
-
I call them "Weighted Random Word [or Code] Machines." I have a friend who said he wasn't going to continue the conversation if I was using "slurs." I called him a Cogger Lover.
-
@liiwi @futurebird
It (LLM/Generative AI) doesn't augment intelligence. If anything, it conditions people to think less!
-
@futurebird Good point. There is also the question: can there be intelligence without identity?
-
@raymaccarthy @futurebird The context was that it augments the user, like a tool.
-
Got it!
-
@liiwi @futurebird
It's about the most useless computer tool I've ever seen.
It wastes the user's time.
-
Basically, it's a method to predict the next content in a text file. The whole conversation between you and the LLM is one file, and the LLM tries to find the most likely next text based on the training data.
There is something significant here: LLMs were trained on internet forums and social media.
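The "predict the next content in a text file" loop can be sketched with a toy bigram model. Real LLMs use neural networks over subword tokens rather than word counts, but the generation loop has the same shape: the whole text so far goes in, one token comes out, repeat. The training sentence here is a made-up placeholder:

```python
from collections import Counter, defaultdict

# Count, for each word in the training text, which words follow it
# and how often. This stands in for the LLM's learned distribution.
training_text = "the cat sat on the mat the cat ran on the floor"

follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def generate(prompt: str, n_words: int = 5) -> str:
    """Repeatedly append the most likely next word (greedy decoding)."""
    context = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(context[-1])
        if not candidates:
            break  # the training data never shows what follows this word
        context.append(candidates.most_common(1)[0][0])
    return " ".join(context)

print(generate("the"))
```

Greedy decoding like this always picks the single most likely continuation; actual LLM samplers add randomness, which is what makes the same prompt give different answers.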
@futurebird maybe "sloppy autocomplete" would be better?
-
The early models did involve hundreds of people who were shown multiple generations and, for eight hours a day, clicked on the one that was the least idiotic. But I think a lot of the newer ones just violate OpenAI/Anthropic's user agreements and use the existing models to provide that feedback (DeepSeek likely did this). They're likely also using the feedback users give GPT/Claude during use.
So yes, it's always going to carry the bias of whatever rules were printed out for those original employees, as well as their own personal biases.
But that isn't too simplistic an explanation. These are next-word (token) prediction machines. Each thing that's generated requires the entire text context to be passed through the model for every new word. The models themselves are deterministic; it's just that the sampler doesn't always pick the most likely next token. It might randomly select the token rated at 96% instead of the one at 98% to introduce some variability.
-
@futurebird Godd point, there is also the question that can there be intelligence without identity?
I have always thought there can be intelligence without identity.
A big part of intelligence seems to be about answering the question "what happens next?" every moment of its existence. Answering this question covers everything from a dropping ball to Relativity.
NOT saying this is all of intelligence, just one of its major tasks. This part of the model doesn't need to have a "me" in it.
-