Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.

Uncategorized · tags: machinelearning, llm, research
86 posts · 57 posters

This thread has been deleted. Only users with topic management privileges can see it.
• #43 · quantensalat@scicomm.xyz, replying to devsimsek@universeodon.com:

  > @Quantensalat @dpiponi That's what I hate about these companies.

  @devsimsek @dpiponi that they act like AI = LLMs?
• #44 · knowattitude@m.ai6yr.org:

  @anne_twain @devsimsek
  "That's like a high school history class having their own essays as research material." - a memorable phrase.
• #45 · devsimsek@universeodon.com, replying to quantensalat@scicomm.xyz:

  @Quantensalat @dpiponi Yes. I used the same tactic while naming my post as satire. It's annoying...
• #46 · dpiponi@mathstodon.xyz, replying to quantensalat@scicomm.xyz:

  @Quantensalat @devsimsek There's a setup around equations (9) and (10) where the distribution used for training the next generation is a linear combination of the distribution your current generation generates and external data. As the amount of external data goes to zero, you expect model collapse. This is hardly surprising. I don't know anyone who expects you can just keep training based on previous results and expect something radically new to happen. (Though something *useful* can happen - eg. you may improve performance this way. See "rectification" in flow-matching.)

  Note that this doesn't rule out all forms of self-training - just one kind. As a concrete example, an LLM trained to generate code can learn from the output of the generated code. Such output is, in some sense, exogenous.
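[Editor's note: the mixing dynamic dpiponi describes can be sketched in one dimension. This is a toy illustration, not the paper's actual setup; all function names and parameters are invented for the sketch. Each generation fits a Gaussian to a `lam : (1 - lam)` mix of fresh real data and the previous generation's own samples; with no external data the fitted spread drifts toward zero, the one-dimensional analogue of model collapse.]

```python
import random
import statistics

def fit_gaussian(samples):
    """MLE fit: sample mean and population standard deviation."""
    return statistics.fmean(samples), statistics.pstdev(samples)

def run_generations(lam, n=50, steps=600, seed=1):
    """Train each generation on a lam : (1 - lam) mix of real data
    (drawn from N(0, 1)) and the previous generation's own samples."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0 matches the real distribution
    for _ in range(steps):
        n_real = round(lam * n)
        real = [rng.gauss(0.0, 1.0) for _ in range(n_real)]
        synthetic = [rng.gauss(mu, sigma) for _ in range(n - n_real)]
        mu, sigma = fit_gaussian(real + synthetic)
    return sigma

# With no external data the fitted spread shrinks generation after
# generation; with a steady external fraction it stays anchored.
print(run_generations(lam=0.0))  # drifts toward 0 (collapse)
print(run_generations(lam=0.5))  # stays near 1
```

With small per-generation samples the shrinkage compounds quickly; with a steady fraction of real data the real distribution keeps re-anchoring the fit, which is exactly the role of the external-data term in the mixture.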
• devsimsek@universeodon.com (original post):

  Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.

  Not "we think it's unlikely." Not "it seems hard." Formally proved.

  The model doesn't climb toward AGI — it slowly forgets what reality looks like. They call it model collapse. The math calls it inevitable.
  I wrote about it 👇

  https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-behind-proves-it/

  #AI #MachineLearning #LLM #Research

• #47 · flaki@flaki.social, replying:

  @devsimsek

  > Human-generated data is irreplaceable. The “internet is running out of training data” problem just got mathematically formalized.

  Yeah, I think the AI con mob has realized this already (but of course they're not saying the quiet part out loud). With Satya whining about people calling it slop, and the AI industry trying to force it down everyone's throats no matter the cost (e.g. Copilot), I think they realize there is only so much internet and historical content they can use to train their models. Now they want *you* to help train it for them: prompt Claude to spit out some code, ask Copilot for a PR review, and _interact_ with it, pointing out where it was stupid and confirming when it did a good job. By interacting with an AI model you are improving it with exactly this essential human input.
• #48 · devsimsek@universeodon.com, replying to dpiponi@mathstodon.xyz:

  @dpiponi @Quantensalat Yep, I also implied this in my post's closing remarks. https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-behind-proves-it/#:~:text=The%20smarter%20path%20–%20and%20what%20labs%20are%20quietly%20shifting%20toward%20–%20is%C2%A0better%20data%2C%20better%20curation%2C%20better%20grounding%20in%20reality.%20Which%2C%20ironically%2C%20means%20humans%20stay%20in%20the%20loop%20longer%20than%20the%20singularitarians%20wanted.
• laalsaas@c3d2.social wrote:

  @devsimsek this only means that LLMs can't provide their own training data, right? Could they still "invent" new algorithms that make more of the existing data?

• #49 · devsimsek@universeodon.com, replying:

  @laalsaas Yep.
• kaidu@mastodon.social wrote:

  @devsimsek Nobody ever claimed that LLMs get better by being trained on their own synthetic data. This blog post is very misleading.

  The idea of self-improvement and the singularity is that LLMs write improved versions of their own codebase and themselves perform the research and experiments for coming up with better models. The idea of the singularity is interesting but also full of hidden assumptions. I'm always confused when people act as if the singularity were real. It's just science fiction.

• #50 · devsimsek@universeodon.com, replying:

  @kaidu Sure, the title is satirical, but I don't think you read it very carefully, since both the article and my post specifically talk about one particular training method...
• #51 · dpiponi@mathstodon.xyz, replying to devsimsek@universeodon.com:

  @devsimsek @Quantensalat Yeah, I did kinda guess that's what you meant by "better grounding in reality", although it could also mean real reality 🙂
• #52 · jpaskaruk@growers.social, replying to devsimsek@universeodon.com:

  @devsimsek The Habsburgs had a better chance of evolving into superhumans.
• kieraaa@mastodon.art wrote:

  @devsimsek if i eat my own shit repeatedly will i become a singularity

• #53 · devsimsek@universeodon.com, replying:

  @kieraaa I don't know, someone should simulate that.
• #54 · aka_quant_noir@hcommons.social, replying to devsimsek@universeodon.com:

  @devsimsek

  They so want AI to evolve like humans did, but faster. But on the individual timescale intelligence is a temporary affliction. The body and mind deteriorate. And there's no Moore's Law for neurons, so good luck brute-forcing billionaire intelligence.
• #55 · alahmnat@woof.tech, replying to aka_quant_noir@hcommons.social:

  @aka_quant_noir @devsimsek Oh, I think we've achieved billionaire intelligence already. I just have a much dimmer view of billionaires.
• #56 · alahmnat@woof.tech, replying to flaki@flaki.social:

  @flaki And it's why companies like Atlassian keep sending out notices that they're going to start using all of the data you've been forced to put on their servers (because they took away local licensing) and feeding it into their ditto machines.
• #57 · rootwyrm@weird.autos, replying to devsimsek@universeodon.com:

  @devsimsek and this is old math, old theory, old knowledge. Gods, do I wish I'd kept the various papers.

  We've literally known for over two decades that LLMs are dead ends. It's why IBM spent billions hyper-focusing Watson. We already knew more context just made it worse, regardless of compute or method. It's not 'intelligence,' it's a bad search function. There's shit demonstrating that going back to the 1980s.
• lunadragofelis@void.lgbt wrote:

  @devsimsek I think AGI and self-improvement are possible, but definitely not with the technology (neural LLMs) that is being marketed as "AI" today.

  I think that AGI needs to be able to think logically.

• #58 · aoeuidhtns@app.wafrn.net, replying:

  @devsimsek@universeodon.com @LunaDragofelis@void.lgbt

  If you make AGI able to think logically, then the world ends. We need to stop all AI research. If you are researching AI and are not actively trying to sabotage it, then everyone's going to die.
• #59 · rootwyrm@weird.autos, replying to dpiponi@mathstodon.xyz:

  @dpiponi @Quantensalat @devsimsek That part is ultimately a rehash of well-known theory. THAT part, IIRC, goes back to the 1940s or 1950s.

  And it absolutely rules out all forms of 'self-training.' It's not just mathematically impossible but a total logical fallacy. How can a system with no reference make correct determinations? Simple: it can't.
• #60 · rootwyrm@weird.autos, replying to anne_twain:

  @anne_twain @devsimsek this requires two components LLMs do not, cannot, and will not ever have: intent and originality.
  Researchers have done self-modifying targeted things. It takes no time at all for the results to become impossible for humans to understand. That does not mean they are better; usually they weren't, even when hyper-focused with specific controls.
• huxley@furry.engineer wrote:

  @devsimsek this is one of those things that seemed intuitive to us skeptics, but it's great to see it proven

• #61 · lioh@social.anoxinon.de, replying:

  @huxley @devsimsek don't scepticism and intuition mitigate each other?
• #62 · aka_quant_noir@hcommons.social, replying to alahmnat@woof.tech:

  @alahmnat @devsimsek
  I think we're in the billionaire-intelligence decline phase. They're going nuts.
Powered by NodeBB Contributors
Graciously hosted by data.coop