FARVEL BIG TECH

"Generative AI systems suffer from what is known as “hallucinations,” which means that they often confidently state something that is not true.

This thread has been deleted. Only users with topic management privileges can see it.
remixtures@tldr.nettime.org wrote (last edited)
#1

    "Generative AI systems suffer from what is known as “hallucinations,” which means that they often confidently state something that is not true. Some hallucinations are just silly and are easy to spot and dismiss. Others involve made-up facts which, delivered by an AI system believed to be objective and all-knowing, can lead students to a misguided and potentially dangerous understanding of important events, histories, or political choices.

    Generative AI systems suffer from hallucinations because they rely on large-scale pattern recognition. When prompted with a question or request for information, they identify related material in their database and then assemble a set of words or images, based on probabilities, that “best” satisfies the inquiry. They do not “think” or “reason” and thus their output cannot be predicted, can change in response to repeated identical prompts, and may not be reliable.

    As OpenAI researchers explained in a recent paper, large language models will always be prone to generating plausible but false outputs, even with perfect data, due to “epistemic uncertainty when information appeared rarely in training data, model limitations where tasks exceeded current architectures’ representational capacity, and computational intractability where even superintelligent systems could not solve cryptographically hard problems.” And this is not a problem that can be solved by scaling up and boosting the compute power of these systems. In fact, numerous studies have shown that more advanced AI models actually hallucinate more than previous simpler ones.

    The temptation to use AI and accept its output as truth is great. Even professionals who should know better have succumbed. We have examples of lawyers using AI to write their briefs and judges using AI to write their decisions."

    https://socialistproject.ca/2025/10/ai-and-education/

    #AI #GenerativeAI #Education #Schools #CriticalThinking #Hallucinations
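To illustrate the probability-based assembly the quoted article describes, here is a minimal, hypothetical Python sketch of next-token sampling: a model assigns probabilities to candidate continuations and one is drawn at random, which is why two identical prompts can yield different answers and why a fluent answer need not be a true one. The vocabulary and probabilities below are invented for illustration and do not come from any real model.

import random

# Toy illustration of probabilistic text generation (hypothetical values).
# A real language model scores tens of thousands of tokens with a neural
# network; here we hard-code a tiny distribution to show the mechanism.
next_token_probs = {
    "Paris": 0.55,      # plausible and true
    "Lyon": 0.25,       # plausible but false
    "Marseille": 0.15,  # plausible but false
    "Atlantis": 0.05,   # implausible hallucination
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token according to its probability weight."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of France is"
for run in range(5):
    # The same prompt can produce different continuations on each run,
    # and nothing in the sampling step checks whether the answer is true.
    print(f"Run {run + 1}: {prompt} {sample_next_token(next_token_probs)}")

Note that the sampling step only ranks continuations by plausibility; filtering out the false ones would require knowledge the distribution itself does not carry, which is the gap the article's hallucination argument points to.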

jwcph@helvede.net shared this topic