Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.
-
@devsimsek The good news about this: when the stuff explodes, a lot of idiots will lose a lot of money and prove even further how stupid this bubble is.
@Keldrim @devsimsek But we'll still be out of jobs.
-
@Quantensalat @devsimsek Tech bros have been claiming their AIs are alive for years, so if the average person who knows nothing about computers thinks we already have AGI, who can really blame them? Anthropic all but claims to have invented the Terminator.
Maybe something like this will stop the panic.
Which is not to say people shouldn't be concerned in general, and very specifically about environmental impacts.
@musicman @devsimsek As with all mathematical theorems, there's probably a not-too-far-fetched loophole circumventing some of their assumptions. Even if that turns out to be the case, it doesn't mean Skynet is becoming self-aware any time soon.
-
Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.
Not "we think it's unlikely." Not "it seems hard." Formally proved.
The model doesn't climb toward AGI — it slowly forgets what reality looks like. They call it model collapse. The math calls it inevitable.
I wrote about it
https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-behind-proves-it/
@devsimsek Inbreeding is never a good idea; that seems quite intuitive, doesn't it?
-
@devsimsek The paper doesn't prove that. It proves that "if the proportion of exogenous, externally grounded signal vanishes asymptotically, the system undergoes degenerative dynamics."
The necessary asymptotic condition is not met in real use.
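The distinction this reply draws can be illustrated with a toy simulation (my own sketch, assuming the simplest possible "model": a Gaussian refit each generation on a mix of fresh data and its own samples; this is not code from the paper or the blog post). With no exogenous signal the fitted spread drifts toward collapse; with a constant exogenous fraction it stays anchored:

```python
import numpy as np

def run_generations(real_frac, n=200, generations=500, seed=0):
    """Refit a Gaussian each generation on a mix of fresh 'real' samples
    (the exogenous, externally grounded signal) and samples drawn from
    the previous generation's fitted model (the endogenous signal)."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0                # generation 0 matches the true N(0, 1)
    n_real = int(n * real_frac)
    for _ in range(generations):
        real = rng.normal(0.0, 1.0, size=n_real)        # grounded data
        synth = rng.normal(mu, sigma, size=n - n_real)  # model's own output
        data = np.concatenate([real, synth])
        mu, sigma = data.mean(), data.std()             # naive refit
    return sigma

collapsed = run_generations(real_frac=0.0)  # exogenous signal vanished
grounded = run_generations(real_frac=0.2)   # constant exogenous fraction
print(f"no grounding: sigma = {collapsed:.3f}; 20% grounding: sigma = {grounded:.3f}")
```

In this toy the degeneration comes purely from the model consuming its own samples, and any constant slice of external data is enough to stop the drift, which is why the asymptotic "signal vanishes" condition carries all the weight in the theorem.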
-
@devsimsek this only means that LLMs can't provide their own training data, right? Could they still "invent" new algorithms, that make more of the existing data?
-
@devsimsek Chatting with U Toronto AI profs 6, 7 years ago, I posed a problem.
"Teach your AI everything about whole, integer, rational and real numbers. Ask it to solve a problem that requires it to invent complex numbers."
Reply: "Oh... It doesn't work that way."
I knew that, but the ability to frame your observations as the product of a higher order system is IMHO key to what we call "intelligence". Collecting evidence that can disprove your hypothesis is science.
LLM approaches are neither, in a very expensive way.
I'll have to read the paper, though. I'm looking forward to the AI equivalent of Gödel's theorem that shuts down this annoying iteration of the field.
-
"The curse of recursion" or, as I've been calling it for a while now, "a feedback loop of shit."
-
@devsimsek
This is great. I’ve been saying the same since before it was conceived, but I expected it on the heels of the Cambridge Analytica scandal and the techbros’ desire to use it as a Maxwell’s Demon. If these AI developers cared about their product, they would be funding, not cutting, research, the sciences, the arts, and quality free education, ensuring diversity of experience and insight. But they are going out of their way to destroy their own models with falsehoods of every kind.
They & it lack discernment.
-
@devsimsek What should we trust, then? Researchers, or LinkedIn Unemployed AI Ambassadors?
-
@devsimsek also see https://berryvilleiml.com/2026/01/10/recursive-pollution-and-model-collapse-are-not-the-same/
This is part of a long running #ML research thread with big #MLsec impact
-
@devsimsek I said this a few years ago, and I am no mathematician. Simple combinatorics and discrete math over sets will tell you that.
-
@devsimsek I think AGI and self-improvement are possible, but definitely not with the technology (neural LLMs) that is being marketed as "AI" today.
I think that AGI needs to be able to think logically.
-
@LunaDragofelis @devsimsek ^ this tbh. The single-minded focus on scaling LLMs is seemingly caused by parts of the AI crowd being hammers that view every problem as a nail. The path to better products will involve many different technologies being glued together.
-
@devsimsek@universeodon.com I don’t think this is the usual formulation of RSI, though – in the one I know, the input of the AI is not its output, but the environment plus (a representation of) itself. So I would say the way the article (and blog post) formulate their thesis is misleading.
(I used to worry about AGI, and the current focus on LLMs stopped that. Not because such a self-improvement loop is impossible (which I don’t expect it to be, tbh), but rather because it’s extremely unlikely due to their very low homoiconicity.)
-
@devsimsek An inevitable melting into slop.
Every time you copy something, you lose some detail. Continue long enough and you eventually do get a Singularity: all information compressed to a single "1".
-
@thearrivingdeparture @devsimsek Even if that were true, it would still be in contrast to, say, being able to play zillions of chess games against yourself to become a stronger player, which does work.
-
@devsimsek This was my intuition as soon as I understood that they are fundamentally just statistical models predicting a distribution: of course if you feed the output of the statistics machine back into itself, it's going to degrade as a model; that's just how statistical modeling works... But it's still nice that someone actually "did the math" to prove it.
What's particularly interesting about this process, from what I understand, is that in isolation none of the synthetic data looks "wrong", which is what makes it so 'tempting' for the bubble-pumpers desperate for training data. And despite none of it looking that bad, with enough of it the entire model can easily collapse into an incoherent pile of gibberish, due to subtle statistical butterflies.
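That "nothing looks wrong in isolation" effect shows up even in a minimal sketch (my own toy example with a Gaussian refit purely on its own samples; the paper's setting is of course far more general). Every generated number is individually plausible, yet the rare-event tail quietly evaporates:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.0, 1.0          # generation 0 matches the true N(0, 1)
tail_mass = []
for _ in range(1000):
    data = rng.normal(mu, sigma, size=500)  # train only on model output
    mu, sigma = data.mean(), data.std()     # refit, no real data mixed in
    # Fraction of samples more than 2 units from the batch mean: the
    # "rare event" tail, about 4.6% under the original distribution.
    tail_mass.append(float(np.mean(np.abs(data - mu) > 2.0)))

print(f"tail mass: gen 0 = {tail_mass[0]:.3f}, gen 999 = {tail_mass[-1]:.3f}")
```

No single synthetic sample is detectably bad; the damage only exists at the level of the distribution, as the tails thin out generation by generation until the model has forgotten that rare events happen at all.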
-
@devsimsek sicko-to-sicko communication
-
@devsimsek Compare how cryptographic RNGs are usually pseudo-RNGs fed with entropy, and how they fail to output approximately random values (of a given strength) once the entropy falls too low.
It's almost as if there is a pattern to this.
-
i'm here for the inevitable model collapse. let's immanentize this bitch!