Don't anthropomorphize LLMs, language is important. Say "the bot generated some text" not "the AI replied". Use "this document contains machine-generated text" not "this work is AI-assisted". See how people squirm when you call out their slop this way.
-
At least potted plants are living things.
And nobody tries to say a washing machine will magically birth AGI (as far as I know).
It's not the "talking to things" part that's madness. It's the belief that a machine that can match tokens and spit out some text that resembles a valid reply is a sign of true intelligence.
When I punch 5 * 5 into a calculator and hit =, I shouldn't ascribe the glowing 25 to any machine intelligence. It should be the same for LLM-powered genAI, but that "natural language" throws us off. Our brains aren't used to dealing with (often) coherent language generated by an unthinking statistical engine doing math on giant matrices.
-
@gabrielesvelto couldn’t agree more with this ethic. The psychological impacts of users, i.e. society, believing that LLMs are people and fulfilling roles that actual humans should, will probably unfold over the years and decades. All because regulators circa 2024/5/6 believed it was overreach to demand LLMs don't use anthropomorphic language and narrative style. Prompt: “what do you think?” Reply: “there is no “I”. This is a machine-generated response, not a conscious self.” - sounds better to me.
-
@gabrielesvelto I'm trying to get people to use the neologism "apokrisoid" for an answer-shaped object. The LLM does not and cannot produce actual answers.
#apokrisoid
-
@gabrielesvelto Exactly. But the media (and hence the public) like to use short-forms, whether accurate or not. I do a presentation to folks about AI (The Good, The Bad and The Ugly), after which everybody keeps referring to "AI", not machine language!!!!!
-
@gabrielesvelto i try to limit my LLM use because it's fundamentally evil, but whenever i do use it i never treat it like a person. i believe that's how people become addicted to chatbots. it is not an intelligent being with experiences and feelings, it's a cold machine that just uses an algorithm to arrange words from a database in a way said algorithm is tweaked to sound like human writing. our brains struggle to understand that, which is how you end up with people abandoning their real friends for AI bots and even considering them to be romantic partners.
also i've heard people say shit like "i always say thank you whenever i ask the AI for help with something so they'll hopefully spare me when the robot uprising comes", and i can't honestly tell if they're joking or not. if not, maybe we should be fighting against the people who are funding these robots you're so scared of? by the way, i highly doubt any sort of robot uprising will happen anytime soon, chatGPT has a fucking existential crisis if you do something as simple as ask for the non-existent seahorse emoji, it's not smart
-
oooorrrr...... 'the clanker clanked out some text'!
"this document contains clanker-sourced text droppings'!

-
@gabrielesvelto this is true, it's really strange seeing non-tech people around me talk abt LLMs as if they were sentient beings, it's kinda unsettling.
Still, we need to make sure there's no lack of responsibility for the operators or users of these programs. With the whole story about the blog post generated autonomously using an LLM as a response to a FOSS maintainer's AI policy, some people kinda forgot that there is a person responsible for setting it up that way and for letting the program loose.
-
@gabrielesvelto @kinou Yeah, it's unclear how much of this is human-directed, and how much is automated. Like, if a bot is trained on aggressive attempts to get patches merged, then that's the behavior it will emulate. Or an actual human could be directing it to act like an asshole in an attempt to get patches merged.
@Andres4NY @gabrielesvelto @kinou well, but an LLM does not have "behavior" as a property. It is just programmed to match particular patterns of words. I think that's related to the distinction the OP is making.
-
@bit absolutely, and it gives people the impression that they have failure modes, which they don't. Their output is text which they cannot verify, so whether the text is factually right or wrong is irrelevant. Both are valid and completely expected outputs.
@gabrielesvelto @bit This! This really needs to be widely understood.
-
@gabrielesvelto the worst cases of this are when people say "chatgpt said..." as if an AI could talk. Or "chatgpt thinks..." as if an AI could think.