Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.
Not "we think it's unlikely." Not "it seems hard." Formally proved.
The model doesn't climb toward AGI — it slowly forgets what reality looks like. They call it model collapse. The math calls it inevitable.
I wrote about it
https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-behind-proves-it/
@devsimsek This was my intuition as soon as I understood that they are fundamentally just predictive models of a statistical distribution: of course, if you feed the output of the statistics machine back into itself, it's going to degrade as a model; that's just how statistical modeling works. Still, it's nice that someone actually "did the math" to prove it.
What's particularly interesting about this process, from what I understand, is that in isolation none of the synthetic data looks "wrong", which is what makes it so tempting for the bubble-pumpers desperate for training data. And despite none of it looking that bad, with enough of it the entire model can easily collapse into an incoherent pile of gibberish, due to subtle statistical butterflies.
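A minimal sketch of that feedback loop, purely my own toy illustration (not the paper's construction): fit a Gaussian to data, then keep refitting it to samples drawn from its own previous fit. Each generation looks plausible on its own, but estimation noise compounds and the fitted spread tends to drift toward zero.

```python
# Toy model-collapse loop: each "generation" is fit only to samples
# drawn from the previous generation's fitted distribution.
# Any single run is noisy, but on average the estimated spread shrinks,
# so the tails of the real distribution are gradually forgotten.
import random
import statistics

random.seed(0)
N = 50  # small sample size per generation makes the drift visible sooner

# Generation 0 trains on "real" data: a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(N)]

for gen in range(101):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    if gen % 20 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f} sigma={sigma:.3f}")
    # The next generation never sees real data again, only synthetic output.
    data = [random.gauss(mu, sigma) for _ in range(N)]
```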
-
@devsimsek sicko-to-sicko communication
-
@devsimsek Compare how cryptographic RNGs are usually pseudo-RNGs seeded with entropy, which fail to produce output that approximates randomness (at a given strength) once the entropy falls too low.
It's almost as if there is a pattern to this.
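A toy version of that analogy (mine, and deliberately simplified): a generator is only as unpredictable as the entropy it was seeded with. Seed Python's non-cryptographic PRNG with just 8 bits of entropy and an observer can brute-force the seed and predict the entire future stream.

```python
# If the seed entropy is tiny, "random" output is fully predictable:
# an attacker just replays every possible seed until one matches.
import random

secret_seed = 173  # pretend the entropy source could only supply 8 bits
rng = random.Random(secret_seed)
observed = [rng.randrange(256) for _ in range(4)]  # output the attacker sees

for guess in range(256):  # search the whole 8-bit seed space
    candidate = random.Random(guess)
    if [candidate.randrange(256) for _ in range(4)] == observed:
        prediction = [candidate.randrange(256) for _ in range(5)]
        print(f"seed recovered: {guess}, next outputs: {prediction}")
        break
```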
-
i'm here for the inevitable model collapse. let's immanentize this bitch!
-
@devsimsek So... let me get this straight. Autocoprophagic #RSI •doesn't• lead to #AGI? Say it ain't so!
#AI
-
@devsimsek Is that a thing people believe, that LLMs generate themselves towards the singularity simply by eating their own output, with no other feedback?
@Quantensalat @devsimsek I'm sure you'll find plenty of straw men who do
-
@devsimsek Chatting with U Toronto AI profs 6, 7 years ago, I posed a problem.
"Teach your AI everything about whole, integer, rational and real numbers. Ask it to solve a problem that requires it to invent complex numbers."
Reply: "Oh... It doesn't work that way."
I knew that, but the ability to frame your observations as the product of a higher order system is IMHO key to what we call "intelligence". Collecting evidence that can disprove your hypothesis is science.
LLM approaches are neither, in a very expensive way.
I'll have to read the paper, though. I'm looking forward to the AI equivalent of Gödel's Theorem that shuts down this annoying iteration of the field.
@TallSimon @devsimsek I haven’t looked at the proof, but I wonder if Gödel plays a role in it. Seems like at least Gödel would strongly imply this new proof.
-
@devsimsek this is one of those things that seemed intuitive to us skeptics but it's great to see it proven
-
@dpiponi @devsimsek I find the paper interesting, but I would like to understand the exact premises. "AI" is not the same thing as generative AI or LLMs; it probably makes little sense to sell this as a general statement about "AI".
-
@devsimsek wow, almost as if this was a problem known as overtraining for well over 30 years
-
@devsimsek it's the only thing that makes sense if you know just a little about how they work (I don't know more than a little).
Like, if you output whatever is most likely and feed that back in as input, it's only logical (at least to me) that eventually you'll get a mushy average.
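That intuition is easy to demo with a toy example of my own (not from the article): sample from a distribution, refit to the samples, repeat. Tokens that randomly miss one generation's sample drop to probability zero and can never return, so the distribution keeps narrowing toward its mode.

```python
# Toy "mushy average": each generation refits token probabilities from a
# finite sample of the previous generation's output. Sampling drift kills
# rare tokens, and a token at probability zero can never be sampled again,
# so probability mass concentrates on ever fewer tokens.
import random
from collections import Counter

random.seed(1)
vocab = list("abcdefgh")
probs = {t: 1 / len(vocab) for t in vocab}  # generation 0: uniform "reality"
K = 25  # samples per generation

for gen in range(101):
    sample = random.choices(vocab, weights=[probs[t] for t in vocab], k=K)
    counts = Counter(sample)
    probs = {t: counts[t] / K for t in vocab}  # refit on own output only
    if gen % 25 == 0:
        alive = [t for t in vocab if probs[t] > 0]
        print(f"gen {gen:3d}: surviving tokens = {''.join(alive)}")
```
-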
@devsimsek This feels like a weird argument, because it proves a version that I've never heard anyone arguing for. Like, when I've heard people talk about AI itself accelerating AI's improvement (on both pro and con sides), the argument wasn't that AI would self-train on its own output. The argument was that AI would replace AI developers and accelerate the development of better AI code.
-
@SRAZKVT Exactly.
-
@devsimsek Nobody ever claimed that LLMs get better by being trained on their own synthetic data. This blog post is very misleading.
The idea of self-improvement and the singularity is that LLMs write improved versions of their own codebase and themselves perform the research and experiments needed to come up with better models.
The idea of the singularity is interesting but also full of hidden assumptions. I'm always confused when people act as if the singularity were a real thing. It's just science fiction.
-
@Quantensalat @dpiponi That's what I hate about these companies.
-
@ghostinthenet Yep
-
@devsimsek also see https://berryvilleiml.com/2026/01/10/recursive-pollution-and-model-collapse-are-not-the-same/
This is part of a long-running #ML research thread with big #MLsec impact.
@noplasticshower Thanks, I'll look into it
-
@anne_twain YEP, most of the people who commented assume every iteration is trained on fresh data that comes from the internet ...
-
@devsimsek @dpiponi that they act like AI=LLMs?
-
@anne_twain @devsimsek
"That's like a high school history class having their own essays as research material." - a memorable phrase.