Don't anthropomorphize LLMs, language is important. Say "the bot generated some text" not "the AI replied". Use "this document contains machine-generated text" not "this work is AI-assisted". See how people squirm when you call out their slop this way.
-
oooorrrr...... "the clanker clanked out some text"!
"this document contains clanker-sourced text droppings"!

-
@gabrielesvelto this is true, it's really strange seeing non-tech people around me talk about LLMs as if they were sentient beings, it's kinda unsettling.
Still, we need to make sure there's no lack of responsibility for the operators or users of these programs. With the whole story about the blog post generated autonomously by an LLM as a response to a FOSS maintainer's AI policy, some people kinda forgot that there is a person responsible for setting it up that way and for letting the program loose.
-
@gabrielesvelto @kinou Yeah, it's unclear how much of this is human-directed, and how much is automated. Like, if a bot is trained on aggressive attempts to get patches merged, then that's the behavior it will emulate. Or an actual human could be directing it to act like an asshole in an attempt to get patches merged.
@Andres4NY @gabrielesvelto @kinou well, but an LLM does not have "behavior" as a property. It is just programmed to match particular patterns of words. I think that's related to the distinction the OP is making.
-
@bit absolutely, and it gives people the impression that LLMs have failure modes, which they don't. Their output is text which they cannot verify, so whether the text is factually right or wrong is irrelevant: both are valid and completely expected outputs.
@gabrielesvelto @bit This! This really needs to be widely understood.
-
@gabrielesvelto the worst case of this is when people say "chatgpt said..." as if an AI could talk, or "chatgpt thinks..." as if an AI could think.