<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence.]]></title><description><![CDATA[<p>Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.</p><p>Not "we think it's unlikely." Not "it seems hard." Formally proved.</p><p>The model doesn't climb toward AGI — it slowly forgets what reality looks like. They call it model collapse. The math calls it inevitable.<br />I wrote about it <img src="https://forum.fedi.dk/assets/plugins/nodebb-plugin-emoji/emoji/android/1f447.png?v=7979fdcf9c7" class="not-responsive emoji emoji-android emoji--point_down" style="height:23px;width:auto;vertical-align:middle" title="👇" alt="👇" /></p><p><a href="https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-behind-proves-it/" rel="nofollow noopener"><span>https://</span><span>smsk.dev/2026/04/26/ai-cannot-</span><span>self-improve-and-math-behind-proves-it/</span></a></p><p><a href="https://universeodon.com/tags/AI" rel="tag">#<span>AI</span></a> <a href="https://universeodon.com/tags/MachineLearning" rel="tag">#<span>MachineLearning</span></a> <a href="https://universeodon.com/tags/LLM" rel="tag">#<span>LLM</span></a> <a href="https://universeodon.com/tags/Research" rel="tag">#<span>Research</span></a></p>]]></description><link>https://forum.fedi.dk/topic/c2170966-8d11-4af7-8a46-d3e1d37402a8/researchers-just-mathematically-proved-that-ai-can-t-recursively-self-improve-its-way-to-superintelligence.</link><generator>RSS for Node</generator><lastBuildDate>Thu, 30 Apr 2026 08:31:32 GMT</lastBuildDate><atom:link href="https://forum.fedi.dk/topic/c2170966-8d11-4af7-8a46-d3e1d37402a8.rss" rel="self" type="application/rss+xml"/><pubDate>Sun, 26 Apr 2026 
19:01:57 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Tue, 28 Apr 2026 04:46:06 GMT]]></title><description><![CDATA[<p><span><a href="/user/wronglang%40bayes.club">@<span>wronglang</span></a></span> <span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> Yes, sure. I mean I can imagine it improving somewhat still, like when you augment your training set for image recognition by adding noise to a smaller set, but only up to a point before it goes downhill from feedback.</p><p>No, my gut feeling is rather that there have to be much more effective ways to train a model than to brute-force funnel billions of pages of text to a transformer which blindly fits relations between words and structures without understanding them. That seems like doing it the hard way, even if I'm not expert enough to tell you what an alternative would look like.</p>]]></description><link>https://forum.fedi.dk/post/https://scicomm.xyz/users/Quantensalat/statuses/116480512271160697</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://scicomm.xyz/users/Quantensalat/statuses/116480512271160697</guid><dc:creator><![CDATA[quantensalat@scicomm.xyz]]></dc:creator><pubDate>Tue, 28 Apr 2026 04:46:06 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. 
on Tue, 28 Apr 2026 04:38:34 GMT]]></title><description><![CDATA[<p><span><a href="/user/resuna%40ohai.social">@<span>resuna</span></a></span> </p><p>Everything in your post was wrong - so why did you post it?</p><p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span></p>]]></description><link>https://forum.fedi.dk/post/https://masto.sangberg.se/users/troed/statuses/116480482657478002</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://masto.sangberg.se/users/troed/statuses/116480482657478002</guid><dc:creator><![CDATA[troed@masto.sangberg.se]]></dc:creator><pubDate>Tue, 28 Apr 2026 04:38:34 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Tue, 28 Apr 2026 04:38:15 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> <br />"Touch grass." It is not just a reminder to take a break or get some fresh air.</p>]]></description><link>https://forum.fedi.dk/post/https://mstdn.social/users/Urban_Hermit/statuses/116480481388696789</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://mstdn.social/users/Urban_Hermit/statuses/116480481388696789</guid><dc:creator><![CDATA[urban_hermit@mstdn.social]]></dc:creator><pubDate>Tue, 28 Apr 2026 04:38:15 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. 
on Tue, 28 Apr 2026 04:35:45 GMT]]></title><description><![CDATA[<p><span><a href="/user/wronglang%40bayes.club">@<span>wronglang</span></a></span> <span><a href="/user/musicman%40mastodon.social">@<span>musicman</span></a></span> <span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> No, agreed, more compute with the same type of model and the same training data sounds totally implausible to me as a long-term strategy.</p>]]></description><link>https://forum.fedi.dk/post/https://scicomm.xyz/users/Quantensalat/statuses/116480471547324822</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://scicomm.xyz/users/Quantensalat/statuses/116480471547324822</guid><dc:creator><![CDATA[quantensalat@scicomm.xyz]]></dc:creator><pubDate>Tue, 28 Apr 2026 04:35:45 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Tue, 28 Apr 2026 04:19:28 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> you have an awkward sentence here you might want to know about: “Even though I like to say yes, i neither have the enough research nor I want to comment on it”</p><p>I think you’re going for something like “even though I’d like to say yes, I have neither enough research nor any desire to comment on it”… but I’m not entirely sure.</p>]]></description><link>https://forum.fedi.dk/post/https://masto.hackers.town/users/calcifer/statuses/116480407532283035</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://masto.hackers.town/users/calcifer/statuses/116480407532283035</guid><dc:creator><![CDATA[calcifer@masto.hackers.town]]></dc:creator><pubDate>Tue, 28 Apr 2026 04:19:28 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. 
on Tue, 28 Apr 2026 04:11:12 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> This &amp; overall the bigger issue of forced overinclusion &amp; attempted hyper-reliance on machine learning systems, mostly done by governments &amp; their private partners, like auto-shutoff on cars, chatbots as talk therapists &amp; biometric ID/digital ID instead of regular ID card systems, is destined to fail.... It's not so much that activists will win in court or public protests on how these things at least mostly violate civil liberties &amp; are based on data &amp; intellectual property theft.... It's that fundamentally none of these systems actually work!</p><p>They couldn't even write a specific mechanism or method for the vehicle one because nothing fitting the mandate has been developed &amp; the nearest ones obviously don't work.</p>]]></description><link>https://forum.fedi.dk/post/https://regenerate.social/users/BrahmaBelarusian/statuses/116480375020792629</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://regenerate.social/users/BrahmaBelarusian/statuses/116480375020792629</guid><dc:creator><![CDATA[brahmabelarusian@regenerate.social]]></dc:creator><pubDate>Tue, 28 Apr 2026 04:11:12 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Tue, 28 Apr 2026 03:07:03 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> I'd be interested to see the same analysis of human consciousness. 
It is well understood that complexity is a regime on the absolute edge of chaos.</p>]]></description><link>https://forum.fedi.dk/post/https://beige.party/users/onekind/statuses/116480122765282050</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://beige.party/users/onekind/statuses/116480122765282050</guid><dc:creator><![CDATA[onekind@beige.party]]></dc:creator><pubDate>Tue, 28 Apr 2026 03:07:03 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Tue, 28 Apr 2026 02:15:00 GMT]]></title><description><![CDATA[<p><span><a href="https://theblower.au/@anne_twain">@<span>anne_twain</span></a></span> <span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> a better equivalence explanation.</p><p>Here is a 'smart hammer.' It promises to never smash your thumb. And between 20 and 60% of the time, it works! The other 80 to 40% of the time it explodes and takes off your entire arm and sets the nearest three houses on fire.</p><p>The question is not "why are people not stopping when it explodes" or "how do we filter the explosions."<br />The question is "WHY THE FUCK ARE PEOPLE STILL USING AN EXPLODING HAMMER?!"</p><p>I need to remember this one.</p>]]></description><link>https://forum.fedi.dk/post/https://weird.autos/users/rootwyrm/statuses/116479918131282568</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://weird.autos/users/rootwyrm/statuses/116479918131282568</guid><dc:creator><![CDATA[rootwyrm@weird.autos]]></dc:creator><pubDate>Tue, 28 Apr 2026 02:15:00 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. 
on Tue, 28 Apr 2026 02:08:34 GMT]]></title><description><![CDATA[<p><span><a href="https://theblower.au/@anne_twain">@<span>anne_twain</span></a></span> <span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> there is no process. There is no intelligence. There never was and there never will be. <br />It's a bad stochastic parrot written by children who should have been flunked out of 7th grade math and 3rd grade English as illiterate. Used and pushed by people who aren't capable of reviewing a fast food order, or even placing one.</p><p>And guess what? All irrelevant because it takes an incomprehensible level of stupidity to even use a tool that fails dangerously constantly.</p>]]></description><link>https://forum.fedi.dk/post/https://weird.autos/users/rootwyrm/statuses/116479892799809830</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://weird.autos/users/rootwyrm/statuses/116479892799809830</guid><dc:creator><![CDATA[rootwyrm@weird.autos]]></dc:creator><pubDate>Tue, 28 Apr 2026 02:08:34 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Tue, 28 Apr 2026 00:51:14 GMT]]></title><description><![CDATA[<p><span><a href="/user/musicman%40mastodon.social">@<span>musicman</span></a></span> <span><a href="/user/quantensalat%40scicomm.xyz">@<span>Quantensalat</span></a></span> <span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> Anyone who ever copied an audio tape (or worse a VHS tape) knows that the copy is always worse than the original. And in the video case, soon unwatchable.</p><p>Ever heard a repeating echo on a video meeting that just turns to a buzz? 
Same phenomenon.</p><p>So what you need is an AI that can perform experiments in the real world to learn how to do better whatever it is you want it to do.</p><p>Inbreeding animals doesn't work too well either.</p>]]></description><link>https://forum.fedi.dk/post/https://noc.social/users/mike805/statuses/116479588755169990</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://noc.social/users/mike805/statuses/116479588755169990</guid><dc:creator><![CDATA[mike805@noc.social]]></dc:creator><pubDate>Tue, 28 Apr 2026 00:51:14 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Mon, 27 Apr 2026 23:26:55 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> so it doesn't get stuck in a local optimum, it hill-climbs a non-existent one? <img class="not-responsive emoji" src="https://emmah02.files.fedi.monster/custom_emojis/images/000/024/723/original/9e2d40917e208237.png" title=":sicko_yes:" /></p>]]></description><link>https://forum.fedi.dk/post/https://orbital.horse/users/emma/statuses/116479257153832429</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://orbital.horse/users/emma/statuses/116479257153832429</guid><dc:creator><![CDATA[emma@orbital.horse]]></dc:creator><pubDate>Mon, 27 Apr 2026 23:26:55 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. 
on Mon, 27 Apr 2026 23:10:42 GMT]]></title><description><![CDATA[<p><span><a href="/user/quantensalat%40scicomm.xyz">@<span>Quantensalat</span></a></span> <span><a href="/user/musicman%40mastodon.social">@<span>musicman</span></a></span> <span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> depends on what you mean by far-fetched; certainly nothing as easy as "throw more compute at it," which is what made this jump in investment so dramatic.</p>]]></description><link>https://forum.fedi.dk/post/https://bayes.club/users/wronglang/statuses/116479193409718414</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://bayes.club/users/wronglang/statuses/116479193409718414</guid><dc:creator><![CDATA[wronglang@bayes.club]]></dc:creator><pubDate>Mon, 27 Apr 2026 23:10:42 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Mon, 27 Apr 2026 23:08:18 GMT]]></title><description><![CDATA[<p><span><a href="/user/quantensalat%40scicomm.xyz">@<span>Quantensalat</span></a></span> <span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> the main issue is that unless you maintain an external signal (so human input in the form of token sequences that are actually carefully curated for coherence) the models become more and more incoherent. Sounds like you're on board with that. The next step is that we're quickly devaluing money spent on human creativity and the world is awash in LLM garbage. 
So the human signal *is* disappearing.</p>]]></description><link>https://forum.fedi.dk/post/https://bayes.club/users/wronglang/statuses/116479183946938332</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://bayes.club/users/wronglang/statuses/116479183946938332</guid><dc:creator><![CDATA[wronglang@bayes.club]]></dc:creator><pubDate>Mon, 27 Apr 2026 23:08:18 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Mon, 27 Apr 2026 22:59:45 GMT]]></title><description><![CDATA[<p><span><a href="/user/rootwyrm%40weird.autos">@<span>rootwyrm</span></a></span> <span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> </p><p>Mark V. Shaney.</p>]]></description><link>https://forum.fedi.dk/post/https://ohai.social/users/resuna/statuses/116479150353242150</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://ohai.social/users/resuna/statuses/116479150353242150</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Mon, 27 Apr 2026 22:59:45 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Mon, 27 Apr 2026 22:59:19 GMT]]></title><description><![CDATA[<p><span><a href="/user/troed%40masto.sangberg.se">@<span>troed</span></a></span> <span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> </p><p>Large language models are fundamentally different from mammals on every level. They do not build models or reason about them. 
A rat is more "intelligent".</p>]]></description><link>https://forum.fedi.dk/post/https://ohai.social/users/resuna/statuses/116479148658433477</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://ohai.social/users/resuna/statuses/116479148658433477</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Mon, 27 Apr 2026 22:59:19 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Mon, 27 Apr 2026 22:57:51 GMT]]></title><description><![CDATA[<p><span><a href="/user/rootwyrm%40weird.autos">@<span>rootwyrm</span></a></span> <span><a href="/user/dpiponi%40mathstodon.xyz">@<span>dpiponi</span></a></span> <span><a href="/user/quantensalat%40scicomm.xyz">@<span>Quantensalat</span></a></span> <span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> </p><p>"How can a system with no reference make correct determinations? Simple: it can't."</p><p>Especially since it has no model of "correctness" other than "similar to the symbol streams the neural net weights were initialized from".</p>]]></description><link>https://forum.fedi.dk/post/https://ohai.social/users/resuna/statuses/116479142865769034</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://ohai.social/users/resuna/statuses/116479142865769034</guid><dc:creator><![CDATA[resuna@ohai.social]]></dc:creator><pubDate>Mon, 27 Apr 2026 22:57:51 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. 
on Mon, 27 Apr 2026 22:45:28 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> "Don't worry bro, we can totally fix this by adding a committee of expert LLMs to reason about what training data to select, another committee of LLMs to plan the optimal training order, and then a larger one to evaluate the training output. We just need you to sign this cheque for our next three hyperscale GPU data centres..."</p>]]></description><link>https://forum.fedi.dk/post/https://lgbtqia.space/users/anyia/statuses/116479094186757657</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://lgbtqia.space/users/anyia/statuses/116479094186757657</guid><dc:creator><![CDATA[anyia@lgbtqia.space]]></dc:creator><pubDate>Mon, 27 Apr 2026 22:45:28 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Mon, 27 Apr 2026 21:28:13 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> did an LLM write this toot or do LLMs just write like you <img src="https://forum.fedi.dk/assets/plugins/nodebb-plugin-emoji/emoji/android/1f605.png?v=7979fdcf9c7" class="not-responsive emoji emoji-android emoji--sweat_smile" style="height:23px;width:auto;vertical-align:middle" title="😅" alt="😅" /></p>]]></description><link>https://forum.fedi.dk/post/https://hachyderm.io/users/aburka/statuses/116478790420171383</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://hachyderm.io/users/aburka/statuses/116478790420171383</guid><dc:creator><![CDATA[aburka@hachyderm.io]]></dc:creator><pubDate>Mon, 27 Apr 2026 21:28:13 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. 
on Mon, 27 Apr 2026 21:18:01 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> The existence of humans disproves the paper.</p>]]></description><link>https://forum.fedi.dk/post/https://masto.sangberg.se/users/troed/statuses/116478750314527085</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://masto.sangberg.se/users/troed/statuses/116478750314527085</guid><dc:creator><![CDATA[troed@masto.sangberg.se]]></dc:creator><pubDate>Mon, 27 Apr 2026 21:18:01 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Mon, 27 Apr 2026 20:44:36 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> “slowly forgets what reality looks like.” Sort of like billionaires.</p>]]></description><link>https://forum.fedi.dk/post/https://toot.boston/users/rednikki/statuses/116478618913212301</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://toot.boston/users/rednikki/statuses/116478618913212301</guid><dc:creator><![CDATA[rednikki@toot.boston]]></dc:creator><pubDate>Mon, 27 Apr 2026 20:44:36 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. 
on Mon, 27 Apr 2026 20:16:38 GMT]]></title><description><![CDATA[<p><span><a href="/user/quantensalat%40scicomm.xyz">@<span>Quantensalat</span></a></span> <span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> For something more formal on this subject see</p><p><a href="https://arxiv.org/abs/2601.03220" rel="nofollow noopener"><span>https://</span><span>arxiv.org/abs/2601.03220</span><span></span></a></p><p>The abstract starts "Can we learn more from data than existed in the generating process itself?"</p>]]></description><link>https://forum.fedi.dk/post/https://mathstodon.xyz/users/dpiponi/statuses/116478508973040566</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://mathstodon.xyz/users/dpiponi/statuses/116478508973040566</guid><dc:creator><![CDATA[dpiponi@mathstodon.xyz]]></dc:creator><pubDate>Mon, 27 Apr 2026 20:16:38 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Mon, 27 Apr 2026 20:11:28 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span> <span><a href="/user/qualia%40floofy.tech">@<span>qualia</span></a></span> I think you claim too much here. As I understand it, this result deals only with the intrinsic failures of RL-flavored approaches and not things like self-play, let alone problems that might arise from merely very good AI that still outdoes humans economically.</p><p>And I largely agree! I'm glad that someone's finally formalized the intuition that synthetic data is sawdust to bulk out real-world data with and more carefully investigated catastrophic forgetting and the general weaknesses of gradient descent.</p><p>That said... to what extent did you have Claude write this post? Because the format is... 
distinctive.</p>]]></description><link>https://forum.fedi.dk/post/https://yiff.life/users/lorxus/statuses/116478488641491984</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://yiff.life/users/lorxus/statuses/116478488641491984</guid><dc:creator><![CDATA[lorxus@yiff.life]]></dc:creator><pubDate>Mon, 27 Apr 2026 20:11:28 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Mon, 27 Apr 2026 19:43:25 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com">@<span>devsimsek</span></a></span><span> isn't the idea of self-improving AI that the AI modifies its code, so the underlying algorithm / architecture?</span></p>]]></description><link>https://forum.fedi.dk/post/https://yuustan.space/notes/all65r4zp2lfn97v</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://yuustan.space/notes/all65r4zp2lfn97v</guid><dc:creator><![CDATA[hermlon@yuustan.space]]></dc:creator><pubDate>Mon, 27 Apr 2026 19:43:25 GMT</pubDate></item><item><title><![CDATA[Reply to Researchers just mathematically proved that AI can&#x27;t recursively self-improve its way to superintelligence. on Mon, 27 Apr 2026 19:33:51 GMT]]></title><description><![CDATA[<p><span><a href="/user/devsimsek%40universeodon.com" rel="nofollow noreferrer noopener">@<span>devsimsek</span></a></span> excellent. Thanks for the overview!</p>]]></description><link>https://forum.fedi.dk/post/https://notnull.space/users/paul/statuses/01KQ86Y2RSR3XDNE2DMTZ8WD1K</link><guid isPermaLink="true">https://forum.fedi.dk/post/https://notnull.space/users/paul/statuses/01KQ86Y2RSR3XDNE2DMTZ8WD1K</guid><dc:creator><![CDATA[paul@notnull.space]]></dc:creator><pubDate>Mon, 27 Apr 2026 19:33:51 GMT</pubDate></item></channel></rss>