@gabrielesvelto couldn’t agree more with this ethic. The psychological impacts of users, i.e. society, believing that LLMs are people and fulfilling roles that actual humans should fill will probably unfold over the years and decades. All because regulators circa 2024/5/6 believed it was overreach to demand that LLMs not use anthropomorphic language and narrative style. Prompt: “what do you think?” Reply: “There is no ‘I’. This is a machine-generated response, not a conscious self.” - sounds better to me.
@orangefloss@mastodon.social
Don't anthropomorphize LLMs, language is important.
The intention of the ultra-wealthy with LLMs is to turn workers as much as possible into identical, replaceable cogs.

@lapcatsoftware you have to wonder if this’ll create more solidarity between blue-collar and white-collar workers, the former of which bore the brunt of automation prior to 2025. If the overlords don’t have the same level of compliant administrators, do the administrators defect? Or is it a case of death by a thousand cuts and the frog-in-the-pot analogy?