Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.
-
> Human-generated data is irreplaceable. The “internet is running out of training data” problem just got mathematically formalized.
Yeah, I think the AI con mob has realized this already (but of course they're not saying the quiet part out loud). With Satya whining about people calling it slop and the AI industry trying to force it down everyone's throats no matter the cost (e.g. Copilot), I think they realize there is only so much internet and historical content they can use to train their models - now they want *you* to help train it for them. Prompt Claude to spit out some code, ask Copilot for a PR review, and _interact_ with it, pointing out where it was stupid and confirming when it did a good job: by interacting with an AI model you are supplying exactly the essential human input that improves it.
@flaki And it's why companies like Atlassian keep sending out notices that they're going to start using all of the data you've been forced to put on their servers because they took away local licensing, and feeding it into their ditto machines.
-
Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.
Not "we think it's unlikely." Not "it seems hard." Formally proved.
The model doesn't climb toward AGI — it slowly forgets what reality looks like. They call it model collapse. The math calls it inevitable.
I wrote about it
https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-behind-proves-it/
@devsimsek and this is old math, old theory, old knowledge. Gods do I wish I'd kept the various papers.
We've literally known for over two decades that LLMs are dead-ends. It's why IBM spent billions hyper-focusing Watson. We already knew more context just made it worse, regardless of compute or method. It's not 'intelligence,' it's a bad search function. There's shit demonstrating that back to the 1980's.
-
@devsimsek I think AGI and self-improvement are possible. But definitely not with the technology (neural LLMs) that is being marketed as "AI" today.
I think that AGI needs to be able to think logically.
@devsimsek@universeodon.com @LunaDragofelis@void.lgbt
if you make agi able to think logically then the world ends.
we need to stop all ai research. if you are researching ai, and are not actively trying to sabotage it, then everyone's going to die.
-
@Quantensalat @devsimsek There's a setup around equations (9) and (10) where the distribution used for training the next generation is a linear combination of the distribution your current generation generates and external data. As the amount of external data goes to zero, you expect model collapse. This is hardly surprising. I don't know anyone who expects you can just keep training based on previous results and expect something radically new to happen. (Though something *useful* can happen - e.g. you may improve performance this way. See "rectification" in flow-matching.)
Note that this doesn't rule out all forms of self-training - just one kind. As a concrete example, an LLM trained to generate code can learn from the output of the generated code. Such output is, in some sense, exogenous.
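(To make that linear-combination setup concrete, here's a minimal toy sketch in Python; it is my own illustration, not the paper's equations (9) and (10). The mixing weight `alpha`, the tiny sample sizes, and the one-parameter Gaussian standing in for a "model" are all assumptions chosen purely for demonstration.)

```python
# Toy model-collapse demo: each generation is "trained" (a Gaussian fit) on a
# mixture of real data and samples drawn from the previous generation's model.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=50)   # fixed pool of external "human" data
n_train = 50                           # training samples per generation

def run(alpha, generations=200):
    """alpha = fraction of each generation's training set that is synthetic."""
    mu, sigma = real.mean(), real.std()   # generation 0 is fit on real data
    for _ in range(generations):
        n_synth = int(alpha * n_train)
        batch = np.concatenate([
            rng.normal(mu, sigma, size=n_synth),        # previous model's own output
            rng.choice(real, size=n_train - n_synth),   # external signal
        ])
        mu, sigma = batch.mean(), batch.std()           # "retrain" on the mixture
    return mu, sigma

for alpha in (0.0, 0.5, 1.0):
    mu, sigma = run(alpha)
    print(f"alpha={alpha:.1f} -> mu={mu:+.3f}, sigma={sigma:.3f}")

# Typical result: with alpha = 0.0 or 0.5 the fit stays near the real (0, 1);
# with alpha = 1.0 sigma drifts far below 1, i.e. the tails of the real
# distribution are forgotten first, which is the flavour of collapse being
# discussed here.
```

The toy only illustrates the direction of the argument: any fixed share of external data anchors the process, while at alpha = 1 the fit becomes a self-referential random walk that tends to lose information about the original distribution and never gains any.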
@dpiponi @Quantensalat @devsimsek that part, that is ultimately a rehash of well-known theory. THAT part IIRC goes back to like the 1940's or 1950's.
And it absolutely rules out all forms of 'self-training.' It is not just mathematically impossible but a total logical fallacy. How can a system with no reference make correct determinations? Simple: it can't.
-
@anne_twain @devsimsek this requires two components LLMs do not, cannot, and will not ever have: intent and originality.
Researchers have done self-modifying, targeted things. It takes no time at all for the results to become impossible for humans to understand. That does not mean they are better. Usually they weren't, even when hyper-focused with specific controls.
-
@devsimsek this is one of those things that seemed intuitive to us skeptics but it's great to see it proven
@huxley @devsimsek don't scepticism and intuition mitigate each other?
-
@aka_quant_noir @devsimsek Oh I think we've achieved billionaire intelligence already. I just have a much dimmer view of billionaires.
@alahmnat @devsimsek
I think we're in the billionaire intelligence decline phase. They're going nuts.
-
@devsimsek excellent. Thanks for the overview!
-
@devsimsek isn't the idea of self-improving AI that the AI modifies its own code, i.e. the underlying algorithm / architecture?
-
@devsimsek @qualia I think you claim too much here. As I understand it, this result deals only with the intrinsic failures of RL-flavored approaches and not things like self-play, let alone problems that might arise from merely very good AI that still outdoes humans economically.
And I largely agree! I'm glad that someone's finally formalized the intuition that synthetic data is sawdust to bulk out real-world data with and more carefully investigated catastrophic forgetting and the general weaknesses of gradient descent.
That said... to what extent did you have Claude write this post? Because the format is... distinctive.
-
@Quantensalat @devsimsek For something more formal on this subject see
https://arxiv.org/abs/2601.03220
The abstract starts "Can we learn more from data than existed in the generating process itself?"
-
@devsimsek “slowly forgets what reality looks like.” Sort of like billionaires.
-
@devsimsek The existence of humans disproves the paper.
-
@devsimsek did an LLM write this toot or do LLMs just write like you

-
@devsimsek "Don't worry bro, we can totally fix this by adding a committee of expert LLMs to reason about what training data to select, another committee of LLMs to plan the optimal training order, and then a larger one to evaluate the training output. We just need you to sign this cheque for our next three hyperscale GPU data centres..."
-
@rootwyrm @dpiponi @Quantensalat @devsimsek
"How can a system with no reference make correct determinations? Simple: it can't."
Especially since it has no model of "correctness" other than "similar to the symbol streams the neural net weights were initialized from".
-
@devsimsek The existence of humans disproves the paper.
Large language models are fundamentally different from mammals on every level. They do not build models or reason about them. A rat is more "intelligent".
-
@devsimsek and this is old math, old theory, old knowledge. Gods do I wish I'd kept the various papers.
We've literally known for over two decades that LLMs are dead-ends. It's why IBM spent billions hyper-focusing Watson. We already knew more context just made it worse, regardless of compute or method. It's not 'intelligence,' it's a bad search function. There's shit demonstrating that back to the 1980's.
Mark V. Shaney.
-
@devsimsek Is that a thing people believe, that LLMs generate themselves towards the singularity simply by eating their own output and no other feedback?
@Quantensalat @devsimsek the main issue is that unless you maintain an external signal (i.e. human input in the form of token sequences that are actually carefully curated for coherence) the models become more and more incoherent. Sounds like you're on board with that. The next step is that we're quickly devaluing money spent on human creativity and the world is awash in LLM garbage. So the human signal *is* disappearing.
-
@musicman @devsimsek As with all mathematical theorems, there's probably a not-too-far-fetched loophole circumventing some of their assumptions; even if that turns out to be the case, it doesn't mean Skynet is becoming self-aware any time soon.
@Quantensalat @musicman @devsimsek depends on what you mean by far-fetched; certainly nothing as easy as "throw more compute at it", which is what made this jump in investment so dramatic.