The more people rely on generative AI, the more the internet becomes swamped with error-strewn content which then gets hoovered back up into LLM training datasets. The cumulative long-term impact is a gradual degradation at the societal level not only of a population's capacity for critical consumption, but of the quality of the knowledge base itself. Couple that with AI being deliberately used by governments and Big Tech to influence public opinion, and it's hello techno-dystopia.
-
...and here it is, the clearest evidence yet of a fascist tech bro deciding the current corpus of human knowledge needs "correcting", and openly stating his intention to use his LLM to manipulate public understanding.
When I called current iterations of AI "Joseph Goebbels' wet dream", it wasn't hyperbole. Musk is talking about rewriting history to suit himself. As Orwell put it in Nineteen Eighty-Four: "Who controls the past controls the future."
Do you really think other chatbots aren't being similarly gamed?
-
It's not that LLMs *could* be used to manipulate public opinion. This isn't hypothetical. It's already happening.
"The Trump administration is currently pressuring OpenAI and other AI companies to make their models more conservative-friendly. An executive order decreed that government agencies may not procure "woke" AI models that feature "incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.""
https://www.theverge.com/news/798388/openai-chatgpt-political-bias-eval