FARVEL BIG TECH

Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.

Category: Uncategorized · Tags: machinelearning, llm, research
86 posts · 57 posters
This thread has been deleted. Only users with topic management privileges can see it.
• devsimsek@universeodon.com wrote:

    Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.

    Not "we think it's unlikely." Not "it seems hard." Formally proved.

    The model doesn't climb toward AGI — it slowly forgets what reality looks like. They call it model collapse. The math calls it inevitable.
    I wrote about it 👇

    https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-behind-proves-it/

    #AI #MachineLearning #LLM #Research
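
The collapse loop described in the post is easy to see in a toy setting: fit a model to data, sample a fresh training set from the fit, and repeat with no new real data. The sketch below uses a one-dimensional Gaussian as a stand-in for the model; that choice, the sample size, and the generation count are illustrative assumptions, not the linked paper's construction.

    # Toy "model collapse": every generation trains only on the previous
    # generation's output. With a finite sample per generation, estimation
    # error compounds and the fitted spread drifts toward zero, so the
    # tails (rare events) of the real distribution are forgotten first.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                    # small training set per generation
    data = rng.normal(0.0, 1.0, size=n)      # generation 0: real data

    for gen in range(1, 101):
        mu, sigma = data.mean(), data.std()  # "train": fit a Gaussian
        data = rng.normal(mu, sigma, size=n) # next set: model output only
        if gen % 20 == 0:
            print(f"gen {gen:3d}: fitted sigma = {sigma:.3f}")

In a typical run sigma decays well below 1 by generation 100; no new information enters the loop, so the drift only goes one way.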

aburka@hachyderm.io wrote (#69):

    @devsimsek did an LLM write this toot or do LLMs just write like you 😅

• devsimsek@universeodon.com (original post, quoted above)

anyia@lgbtqia.space wrote (#70):

      @devsimsek "Don't worry bro, we can totally fix this by adding a committee of expert LLMs to reason about what training data to select, another committee of LLMs to plan the optimal training order, and then a larger one to evaluate the training output. We just need you to sign this cheque for our next three hyperscale GPU data centres..."

• rootwyrm@weird.autos wrote:

@dpiponi @Quantensalat @devsimsek that part is ultimately a rehash of well-known theory. THAT part, IIRC, goes back to the 1940s or 1950s.

        And it absolutely rules out all forms of 'self-training.' It is not just mathematically impossible but a total logical fallacy. How can a system with no reference make correct determinations? Simple: it can't.

resuna@ohai.social wrote (#71):

        @rootwyrm @dpiponi @Quantensalat @devsimsek

        "How can a system with no reference make correct determinations? Simple: it can't."

        Especially since it has no model of "correctness" other than "similar to the symbol streams the neural net weights were initialized from".

• troed@masto.sangberg.se wrote:

@devsimsek The existence of humans disproves the paper.

resuna@ohai.social wrote (#72):

          @troed @devsimsek

          Large language models are fundamentally different from mammals on every level. They do not build models or reason about them. A rat is more "intelligent".

• rootwyrm@weird.autos wrote:

            @devsimsek and this is old math, old theory, old knowledge. Gods do I wish I'd kept the various papers.

We've literally known for over two decades that LLMs are dead ends. It's why IBM spent billions hyper-focusing Watson. We already knew more context just made it worse, regardless of compute or method. It's not 'intelligence,' it's a bad search function. There's shit demonstrating that back to the 1980s.

resuna@ohai.social wrote (#73):

            @rootwyrm @devsimsek

            Mark V. Shaney.
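
For anyone who misses the reference: Mark V. Shaney was a 1980s Usenet persona whose posts were generated by a small word-level Markov chain, producing locally plausible text with no model of meaning at all. A minimal sketch of the technique; the two-word context and the tiny corpus are just illustrative choices:

    # Mark V. Shaney-style text generation: map each pair of consecutive
    # words to the words observed to follow it, then random-walk the map.
    import random
    from collections import defaultdict

    def build_chain(text):
        words = text.split()
        chain = defaultdict(list)
        for a, b, c in zip(words, words[1:], words[2:]):
            chain[(a, b)].append(c)
        return chain

    def generate(chain, length=25):
        state = random.choice(list(chain))
        out = list(state)
        for _ in range(length):
            followers = chain.get(state)
            if not followers:                  # dead end: jump to a random state
                state = random.choice(list(chain))
                followers = chain[state]
            nxt = random.choice(followers)
            out.append(nxt)
            state = (state[1], nxt)
        return " ".join(out)

    corpus = ("the model does not climb toward AGI it slowly forgets what "
              "reality looks like and the model does not build models or "
              "reason about them")
    print(generate(build_chain(corpus)))

The output is grammatical-ish and means nothing, which is exactly the point being made about systems with no reference.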

• quantensalat@scicomm.xyz wrote:

              @devsimsek Is that a thing people believe, that LLMs generate themselves towards the singularity simply by eating their own output and no other feedback?

wronglang@bayes.club wrote (#74):

@Quantensalat @devsimsek the main issue is that unless you maintain an external signal (human input, in the form of token sequences actually curated for coherence), the models become more and more incoherent. Sounds like you're on board with that. The next step in the argument: we're rapidly devaluing money spent on human creativity, and the world is awash in LLM garbage. So the human signal *is* disappearing.
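
The "external signal" point can be bolted onto the earlier toy Gaussian loop: refresh even a fraction of each generation's training set with new draws from the real distribution and the collapse is damped. The 20% figure and the Gaussian stand-in are arbitrary illustrative choices, not anything from the paper:

    # Same fit-then-sample loop as before, optionally mixing fresh real
    # data (the curated human signal) into every generation's training set.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50

    def run(frac_real):
        data = rng.normal(0.0, 1.0, size=n)
        for _ in range(100):
            mu, sigma = data.mean(), data.std()
            data = rng.normal(mu, sigma, size=n)
            k = int(frac_real * n)
            if k:
                data[:k] = rng.normal(0.0, 1.0, size=k)  # external signal
        return data.std()

    print("pure self-training  :", round(run(0.0), 3))
    print("20% fresh real data :", round(run(0.2), 3))

Typically the mixed loop holds its spread near the true value while the pure loop shrinks, which is this post's worry in miniature: as the fresh human fraction goes to zero, so does the anchor.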

• quantensalat@scicomm.xyz wrote:

@musicman @devsimsek As with all mathematical theorems, there's probably a not-too-far-fetched loophole circumventing some of their assumptions; even then, that doesn't mean Skynet is becoming self-aware any time soon.

wronglang@bayes.club wrote (#75):

@Quantensalat @musicman @devsimsek depends on what you mean by far-fetched; certainly nothing as easy as "throw more compute at it", which is what made this jump in investment so dramatic.

• devsimsek@universeodon.com (original post, quoted above)

emma@orbital.horse wrote (#76):

                  @devsimsek so it doesn't get stuck in a local optimum, it hill-climbs a non-existent one?

• musicman@mastodon.social wrote:

@Quantensalat @devsimsek tech bros have been claiming their AIs are alive for years, so if the average person who knows nothing about computers thinks we already have AGI, who can really blame them? Anthropic all but claims to have invented the Terminator.

Maybe something like this will stop the panic.

Which is not to say people shouldn't be concerned, in general and very specifically about environmental impacts.

mike805@noc.social wrote (#77):

                    @musicman @Quantensalat @devsimsek Anyone who ever copied an audio tape (or worse a VHS tape) knows that the copy is always worse than the original. And in the video case, soon unwatchable.

                    Ever heard a repeating echo on a video meeting that just turns to a buzz? Same phenomenon.

                    So what you need is an AI that can perform experiments in the real world to learn how to do better whatever it is you want it to do.

                    Inbreeding animals doesn't work too well either.
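
The tape-copy analogy maps onto the same mechanism: every pass re-reads the signal with error and writes the error back in. A toy numeric version, with arbitrary hiss and quantization levels chosen only to show the trend:

    # Analog-style generation loss: each "copy" adds noise and crudely
    # re-quantizes; the signal-to-noise ratio degrades with every pass.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 1000)
    original = np.sin(2 * np.pi * 5 * t)                 # the master tape
    copy = original.copy()

    for gen in range(1, 11):
        copy = copy + rng.normal(0.0, 0.05, copy.shape)  # hiss per copy
        copy = np.round(copy * 32) / 32                  # coarse re-quantization
        noise = copy - original
        snr_db = 10 * np.log10((original**2).mean() / (noise**2).mean())
        print(f"copy {gen:2d}: SNR = {snr_db:5.1f} dB")

The meeting-echo buzz is the same loop with gain; nothing in the loop restores information once it is gone.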

• rootwyrm@weird.autos wrote (#78):

                      @anne_twain @devsimsek there is no process. There is no intelligence. There never was and there never will be.
                      It's a bad stochastic parrot written by children who should have been flunked out of 7th grade math and 3rd grade English as illiterate. Used and pushed by people who aren't capable of reviewing a fast food order, or even placing one.

                      And guess what? All irrelevant because it takes an incomprehensible level of stupidity to even use a tool that fails dangerously constantly.

• rootwyrm@weird.autos (#78, quoted above)
rootwyrm@weird.autos wrote (#79):

@anne_twain @devsimsek a better analogy:

                        Here is a 'smart hammer.' It promises to never smash your thumb. And between 20 and 60% of the time, it works! The other 80 to 40% of the time it explodes and takes off your entire arm and sets the nearest three houses on fire.

                        The question is not "why are people not stopping when it explodes" or "how do we filter the explosions."
                        The question is "WHY THE FUCK ARE PEOPLE STILL USING AN EXPLODING HAMMER?!"

                        I need to remember this one.

• devsimsek@universeodon.com (original post, quoted above)

onekind@beige.party wrote (#80):

                          @devsimsek I'd be interested to see the same analysis of human consciousness. It is well understood that complexity is a regime on the absolute edge of chaos.

• devsimsek@universeodon.com (original post, quoted above)

brahmabelarusian@regenerate.social wrote (#81):

@devsimsek This, and the bigger overall issue of forced over-inclusion of and attempted hyper-reliance on machine learning systems, mostly by governments and their private partners (auto-shutoff on cars, chatbots as talk therapists, biometric/digital ID instead of regular ID card systems), is destined to fail. It's not so much that activists will win in court or via public protest over how these things mostly violate civil liberties and are built on data and intellectual-property theft. It's that fundamentally none of these systems actually work!

They couldn't even write down a specific mechanism or method for the vehicle one, because nothing fitting the mandate has been developed and the nearest candidates obviously don't work.

• devsimsek@universeodon.com (original post, quoted above)

calcifer@masto.hackers.town wrote (#82):

                              @devsimsek you have an awkward sentence here you might want to know about: “Even though I like to say yes, i neither have the enough research nor I want to comment on it”

                              I think you’re going for something like “even though I’d like to say yes, I have neither enough research nor any desire to comment on it”… but I’m not entirely sure.

• wronglang@bayes.club (#75, quoted above)

quantensalat@scicomm.xyz wrote (#83):

@wronglang @musicman @devsimsek No, agreed, more compute with the same type of model and the same training data sounds totally implausible to me as a long-term strategy.

• devsimsek@universeodon.com (original post, quoted above)

urban_hermit@mstdn.social wrote (#84):

                                  @devsimsek
                                  "Touch grass." It is not just a reminder to take a break or get some fresh air.

• resuna@ohai.social (#72, quoted above)

troed@masto.sangberg.se wrote (#85):

                                    @resuna

                                    Everything in your post was wrong - so why did you post it?

                                    @devsimsek

• wronglang@bayes.club (#74, quoted above)

quantensalat@scicomm.xyz wrote (#86):

@wronglang @devsimsek Yes, sure. I can imagine it still improving somewhat, like when you augment a training set for image recognition by adding noise to a smaller set, but only up to a point before feedback sends it downhill.

No, my gut feeling is rather that there have to be much more effective ways to train a model than to brute-force funnel billions of pages of text into a transformer that blindly fits relations between words and structures without understanding them. That seems like doing it the hard way, even if I'm not expert enough to tell you what an alternative would look like.
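
The augmentation idea in the first paragraph, in its simplest form: enlarge a small labeled set with jittered copies. A generic sketch with made-up shapes and noise scale, not tied to any framework; as the post says, it helps only up to a point, since the copies carry no information the originals lack:

    # Noise augmentation: stack jittered copies of a small feature matrix;
    # labels carry over unchanged because the jitter is label-preserving.
    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(100, 16))      # small real training set (features)
    y = rng.integers(0, 2, size=100)    # binary labels

    def augment(X, y, copies=4, noise=0.1):
        Xs = [X] + [X + rng.normal(0.0, noise, size=X.shape)
                    for _ in range(copies)]
        return np.vstack(Xs), np.tile(y, copies + 1)

    X_aug, y_aug = augment(X, y)
    print(X_aug.shape, y_aug.shape)     # (500, 16) (500,)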

• tokeriis@helvede.net shared this topic.