FARVEL BIG TECH

Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.

Tags: machinelearning, llm, research
86 posts · 57 posters · 0 views
This thread has been deleted. Only users with topic-management privileges can see it.
• lunadragofelis@void.lgbt
@devsimsek I think AGI and self-improvement are possible. But definitely not with the technology (neural LLMs) that is being marketed as "AI" today.

I think that AGI needs to be able to think logically.
vanuphantom@zug.network (from outside this forum)
wrote, last edited
#21

    @LunaDragofelis
    @devsimsek ^ this tbh. The single-minded focus on scaling LLMs is seemingly caused by parts of the AI crowd being hammers that view every problem as a nail.

    The path to better products will involve many different technologies being glued together.

• devsimsek@universeodon.com

      Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.

      Not "we think it's unlikely." Not "it seems hard." Formally proved.

      The model doesn't climb toward AGI — it slowly forgets what reality looks like. They call it model collapse. The math calls it inevitable.
      I wrote about it 👇

      https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-behind-proves-it/

      #AI #MachineLearning #LLM #Research

timorl@social.wuatek.is (from outside this forum)
wrote, last edited
#22

@devsimsek@universeodon.com I don’t think this is the usual formulation of RSI though – in the one I know, the input of the AI is not its output, but the environment plus (a representation of) itself. So I would say the way the article (and blog post) formulates its thesis is misleading.

      (I used to worry about AGI and the current focus on LLMs stopped that. Not because such a self-improvement loop is impossible (which I don’t expect it to be tbh), but rather because it’s extremely unlikely due to their very low homoiconicity.)

• devsimsek@universeodon.com

yora@mastodon.gamedev.place (from outside this forum)
wrote, last edited
#23

        @devsimsek An inevitable melting into slop.
        Every time you copy something, you lose some detail. Continue long enough and you eventually do get a Singularity. All information compressed to a single "1".
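The copy-of-a-copy intuition is easy to make quantitative. A minimal sketch (my illustration, not from the thread or the paper), where each "copy" is a smoothing pass that throws away fine detail:

```python
import math

# A toy "photocopy of a photocopy": each copy is a 3-point moving
# average, standing in for any lossy re-encoding that discards detail.
signal = [math.sin(2 * math.pi * k / 64) for k in range(64)]

def lossy_copy(s):
    n = len(s)
    return [(s[(i - 1) % n] + s[i] + s[(i + 1) % n]) / 3 for i in range(n)]

copies = signal
for _ in range(2000):
    copies = lossy_copy(copies)

amplitude_before = max(signal) - min(signal)  # ~2.0
amplitude_after = max(copies) - min(copies)   # near zero: detail is gone
```

After enough copies the waveform flattens toward its mean, the one-value "Singularity" of the joke.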

• thearrivingdeparture@mastodon.social

          @devsimsek The paper doesn't prove that. It proves that "if the proportion of exogenous, externally grounded signal vanishes asymptotically, the system undergoes degenerative dynamics."
          The necessary asymptotic condition is not met in real use.
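That asymptotic condition can be made concrete with a toy simulation (my sketch under stated assumptions, not the paper's construction): a Gaussian "model" is repeatedly refit to a mix of its own slightly under-dispersed generations (the 0.9 factor stands in for the tail loss of truncated sampling) and fresh, externally grounded samples.

```python
import random
import statistics

def collapse_sim(exo_fraction, generations=60, n=400, seed=0):
    """Refit a Gaussian, each generation, to a mix of fresh samples
    from the true distribution N(0, 1) and the model's own generations.
    Own generations are drawn slightly under-dispersed (0.9 * sigma),
    a stand-in for the tail loss of truncated sampling."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    for t in range(generations):
        frac = exo_fraction(t)
        data = [rng.gauss(0.0, 1.0) if rng.random() < frac
                else rng.gauss(mu, 0.9 * sigma) for _ in range(n)]
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
    return sigma

# exogenous signal vanishing asymptotically -> degenerative dynamics
sigma_vanishing = collapse_sim(lambda t: 1.0 / (t + 2))
# exogenous fraction bounded away from zero -> variance is preserved
sigma_grounded = collapse_sim(lambda t: 0.5)
```

Under these assumptions a vanishing exogenous fraction drives the fitted variance toward zero, while a constant fraction keeps it near the true value: the degeneracy follows from the vanishing grounding, not from self-reference per se.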

bifouba@kolektiva.social (from outside this forum)
wrote, last edited
#24

          @thearrivingdeparture @devsimsek

          Even if that were true, it would still be in contrast to, say, being able to play zillions of chess games against yourself to become a stronger player, which does work.
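The chess comparison can be sketched as a toy contrast (hypothetical task, my illustration): self-play improves because the game's rules act as an external verifier, whereas labels generated by the model itself provide no corrective signal.

```python
import random
rng = random.Random(0)

# Toy task: classify whether x >= 0.5. The "model" is a threshold.
# In chess self-play the rules (an external verifier) decide outcomes;
# here that role is played by the true labels.
def fit_threshold(labeler, th=0.9, rounds=30, n=200):
    for _ in range(rounds):
        xs = [rng.random() for _ in range(n)]
        zeros = [x for x in xs if not labeler(x, th)]
        ones = [x for x in xs if labeler(x, th)]
        if zeros and ones:
            th = (max(zeros) + min(ones)) / 2  # boundary between classes
    return th

# grounded: labels come from the rules of the task itself
grounded_th = fit_threshold(lambda x, th: x >= 0.5)
# ungrounded: labels come from the model's own current guess
self_labeled_th = fit_threshold(lambda x, th: x >= th)
```

The grounded learner converges on the true boundary because the external signal corrects it; the self-labelled learner is always already consistent with itself, so nothing moves it toward the truth — the difference between self-play with grounded rules and training on free-running output.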

• devsimsek@universeodon.com

becomethewaifu@tech.lgbt (from outside this forum)
wrote, last edited
#25

@devsimsek This was my intuition as soon as I understood that they are fundamentally just statistical distribution predictors: of course if you feed the output of the statistics machine back into itself, it's going to degrade as a model; that's just how statistical modeling works. But it's still nice that someone actually "did the math" and proved it.

What's particularly interesting about this process, from what I understand, is that in isolation none of the synthetic data looks "wrong", which is what makes it so tempting for the bubble-pumpers desperate for training data. And despite none of it looking that bad, with enough of it the entire model can easily collapse into an incoherent pile of gibberish through subtle statistical butterfly effects.

• devsimsek@universeodon.com

sudo_eatpant@critter.cafe (from outside this forum)
wrote, last edited
#26

              @devsimsek sicko-to-sicko communication

• devsimsek@universeodon.com

jens@social.finkhaeuser.de (from outside this forum)
wrote, last edited
#27

@devsimsek Compare how cryptographic RNGs are usually pseudo-RNGs fed with entropy, and how they stop producing approximately random output (of a given strength) once the entropy input falls too low.

                It's almost as if there is a pattern to this.

• devsimsek@universeodon.com

saltywizard@beige.party (from outside this forum)
wrote, last edited
#28

                  @devsimsek

                  i'm here for the inevitable model collapse. let's immanentize this bitch!

• devsimsek@universeodon.com

ghostinthenet@hachyderm.io (from outside this forum)
wrote, last edited
#29

                    @devsimsek So... let me get this straight. Autocoprophagic #RSI •doesn't• lead to #AGI? Say it ain't so! 😏 #AI

• quantensalat@scicomm.xyz

                      @devsimsek Is that a thing people believe, that LLMs generate themselves towards the singularity simply by eating their own output and no other feedback?

dpiponi@mathstodon.xyz (from outside this forum)
wrote, last edited
#30

                      @Quantensalat @devsimsek I'm sure you'll find plenty of straw men who do

• tallsimon@mstdn.ca

                        @devsimsek Chatting with U Toronto AI profs 6, 7 years ago, I posed a problem.

                        "Teach your AI everything about whole, integer, rational and real numbers. Ask it to solve a problem that requires it to invent complex numbers."

                        Reply: "Oh... It doesn't work that way."

                        I knew that, but the ability to frame your observations as the product of a higher order system is IMHO key to what we call "intelligence". Collecting evidence that can disprove your hypothesis is science.

                        LLM approaches are neither, in a very expensive way.

I'll have to read the paper, though. I'm looking forward to the AI equivalent of Gödel's Theorem that shuts down this annoying iteration of the field.

lerxst@az.social (from outside this forum)
wrote, last edited
#31

                        @TallSimon @devsimsek I haven’t looked at the proof, but I wonder if Gödel plays a role in it. Seems like at least Gödel would strongly imply this new proof.

• devsimsek@universeodon.com

huxley@furry.engineer (from outside this forum)
wrote, last edited
#32

                          @devsimsek this is one of those things that seemed intuitive to us skeptics but it's great to see it proven

• dpiponi@mathstodon.xyz

quantensalat@scicomm.xyz (from outside this forum)
wrote, last edited
#33

@dpiponi @devsimsek I find the paper interesting, but I would like to understand the exact premises. "AI" is not the same as gen AI or LLMs; it probably makes little sense to sell this as a general statement about "AI".

• devsimsek@universeodon.com

srazkvt@tech.lgbt (from outside this forum)
wrote, last edited
#34

                              @devsimsek wow, almost as if this was a problem known as overtraining for well over 30 years

• devsimsek@universeodon.com

focaccina@troet.cafe (from outside this forum)
wrote, last edited
#35

@devsimsek it's the only thing that makes sense if you know just a little about how they work (I don't know more than a little).
Like, if you output whatever is most likely, and feed that in again as input, it's only logical (at least to me) that you'll eventually get a mushy average.
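That "mushy average" loop can be sketched with a toy character model (my illustration; the `sharpen` exponent stands in for low-temperature decoding): retrain an empirical distribution on its own mildly sharpened samples and watch the vocabulary collapse.

```python
from collections import Counter
import random

rng = random.Random(1)
corpus = list("the quick brown fox jumps over the lazy dog")

def train(data):
    # "model" = empirical character distribution
    counts = Counter(data)
    total = sum(counts.values())
    return {ch: c / total for ch, c in counts.items()}

def generate(model, n, sharpen=2.0):
    # sampling with a mild preference for likely characters,
    # a stand-in for low-temperature decoding
    chars = list(model)
    weights = [model[ch] ** sharpen for ch in chars]
    return rng.choices(chars, weights=weights, k=n)

data = corpus
for _ in range(40):          # each round: train on the previous output
    model = train(data)
    data = generate(model, len(corpus))

final = train(data)          # vocabulary has collapsed toward the mode
```

The pangram starts with 27 distinct characters; after a few dozen generations of eating its own output, the model is left with almost nothing but its most likely symbol.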

• devsimsek@universeodon.com

pastathief@indiepocalypse.social (from outside this forum)
wrote, last edited
#36

                                  @devsimsek This feels like a weird argument, because it proves a version that I've never heard anyone arguing for. Like, when I've heard people talk about AI itself accelerating AI's improvement (on both pro and con sides), the argument wasn't that AI would self-train on its own output. The argument was that AI would replace AI developers and accelerate the development of better AI code.

• srazkvt@tech.lgbt

devsimsek@universeodon.com (from outside this forum)
wrote, last edited
#37

                                    @SRAZKVT Exactly.

• devsimsek@universeodon.com

kaidu@mastodon.social (from outside this forum)
wrote, last edited
#38

@devsimsek Nobody ever claimed that LLMs get better by being trained on their own synthetic data. This blog post is very misleading.

The idea of self-improvement and the singularity is that LLMs write improved versions of their own codebase and themselves perform the research and experiments needed to come up with better models.
The idea of the singularity is interesting but also full of hidden assumptions. I'm always confused when people act as if the singularity were a given. It's just science fiction.

• quantensalat@scicomm.xyz

devsimsek@universeodon.com (from outside this forum)
wrote, last edited
#39

                                        @Quantensalat @dpiponi That's what I hate about these companies.

• ghostinthenet@hachyderm.io

devsimsek@universeodon.com (from outside this forum)
wrote, last edited
#40

                                          @ghostinthenet Yep 😄

                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop