FARVEL BIG TECH
The more people rely on generative AI, the more the internet becomes swamped with error-strewn content which then gets hoovered back up into LLM training datasets.

apostateenglishman@mastodon.world wrote (last edited) #1

    The more people rely on generative AI, the more the internet becomes swamped with error-strewn content which then gets hoovered back up into LLM training datasets. The cumulative long-term impact of all this is a gradual degradation at the societal level not only of a population's capacity for critical consumption, but of the quality of the knowledge-base itself. Couple that with AI being deliberately used by governments and Big Tech to influence public opinion, and it's hello techno-dystopia.

apostateenglishman@mastodon.world wrote (last edited) #2

      ...and here it is, the clearest evidence yet of a fascist tech bro deciding the current corpus of human knowledge needs "correcting", and openly stating his intention to use his LLM to manipulate public understanding.

      When I called current iterations of AI "Joseph Goebbels' wet dream", it wasn't hyperbole. Musk is talking about rewriting history to suit himself. As George Orwell said, he who controls the past controls the future.

      Do you really think other chatbots aren't being similarly gamed?

apostateenglishman@mastodon.world wrote (last edited) #3

        It's not that LLMs *could* be used to manipulate public opinion. This isn't hypothetical.

        It's already happening.

        "The Trump administration is currently pressuring OpenAI and other AI companies to make their models more conservative-friendly. An executive order decreed that government agencies may not procure "woke" AI models that feature "incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.""

        https://www.theverge.com/news/798388/openai-chatgpt-political-bias-eval

anderslund@expressional.social shared this topic
jeppe@uddannelse.social shared this topic
jwcph@helvede.net shared this topic
        Powered by NodeBB Contributors
        Graciously hosted by data.coop