FARVEL BIG TECH
Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.

Uncategorized · Tags: machinelearning, llm, research
86 Posts · 57 Posters · 0 Views
This thread has been deleted. Only users with topic management privileges can see it.
devsimsek@universeodon.com wrote (last edited)
#1

    Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.

    Not "we think it's unlikely." Not "it seems hard." Formally proved.

    The model doesn't climb toward AGI — it slowly forgets what reality looks like. They call it model collapse. The math calls it inevitable.
    I wrote about it 👇

    https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-behind-proves-it/

    #AI #MachineLearning #LLM #Research
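The collapse dynamic described above can be seen in a toy setting (my illustration, not the paper's construction): fit a Gaussian to finite samples, then repeatedly refit to samples drawn from the previous fit. With no fresh real data, the estimated variance drifts toward zero, i.e. the model gradually forgets the spread of the original distribution.

```python
import random
import statistics

def collapse(generations=1000, n_samples=100, seed=0):
    """Refit a Gaussian to its own samples each generation.

    Toy sketch of self-training with no external data; not the
    paper's formal model. Returns the fitted variance per generation.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the "real" distribution
    variances = [sigma ** 2]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(samples)       # fit to the model's own output
        sigma = statistics.pstdev(samples)   # MLE variance estimate
        variances.append(sigma ** 2)
    return variances

vs = collapse()
print(f"variance: gen 0 = {vs[0]:.3f}, gen 1000 = {vs[-1]:.2e}")
```

The mechanism is that the MLE variance estimate is biased low, E[σ²ₜ₊₁] = ((n−1)/n)·σ²ₜ, so with finite samples the variance shrinks on average every generation and the chain degenerates toward a point mass.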


quantensalat@scicomm.xyz wrote (last edited)
#2

@devsimsek Is that a thing people believe, that LLMs generate themselves towards the singularity simply by eating their own output, with no other feedback?


grob@mstdn.social wrote (last edited)
#3

@devsimsek here's a video of what recursive model collapse looks like. Warning: I found it visually disturbing without being able to say exactly why.

        https://manganiello.eu/objects/6f0da731-81ce-46fe-a67b-5c4dbe2d27e0

I am also not sure whether the video illustrates the exact same mathematical argument as the OP, but it is certainly related.


jargoggles@kolektiva.social wrote (last edited)
#4

          @devsimsek
          I put my butt on the copier and now I'm going to keep feeding copies of that picture into it until it prints out the Mona Lisa.


musicman@mastodon.social wrote (last edited)
#5

@Quantensalat @devsimsek Tech bros have been claiming their AIs are alive for years, so if the average person who knows nothing about computers thinks we already have AGI, who can really blame them? Anthropic all but claims to have invented the Terminator.

Maybe something like this will stop the panic.

Which is not to say people shouldn't be concerned in general, and very specifically about environmental impacts.


keldrim@meow.social wrote (last edited)
#6

@devsimsek The good news about this: when it all blows up, a lot of idiots will lose a lot of money and prove even further how stupid this bubble is.


kieraaa@mastodon.art wrote (last edited)
#7

                @devsimsek if i eat my own shit repeatedly will i become a singularity


drwho@masto.hackers.town wrote (last edited)
#8

                  @Quantensalat @devsimsek Yes.

                  They have also never had a machine crash because a recursive operation overran the stack or used up all the memory.


drwho@masto.hackers.town wrote (last edited)
#9

                    @Keldrim @devsimsek But we'll still be out of jobs.


quantensalat@scicomm.xyz wrote (last edited)
#10

@musicman @devsimsek As with all mathematical theorems, there's probably a not-too-far-fetched loophole circumventing some of their assumptions. Even once that's the case, it doesn't mean Skynet is becoming self-aware any time soon.


yth@mstdn.social wrote (last edited)
#11

@devsimsek Inbreeding is never a good idea; that seems quite intuitive, doesn't it?


thearrivingdeparture@mastodon.social wrote (last edited)
#12

                          @devsimsek The paper doesn't prove that. It proves that "if the proportion of exogenous, externally grounded signal vanishes asymptotically, the system undergoes degenerative dynamics."
                          The necessary asymptotic condition is not met in real use.
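That condition can be made concrete with a small variation on the usual self-training toy model (my sketch, with made-up parameters, not the paper's setup): keep a fixed fraction of externally grounded samples in each generation's training mix, and the fitted distribution stays anchored instead of degenerating.

```python
import random
import statistics

def final_variance(real_frac, steps=300, n=200, seed=1):
    """Fit a Gaussian each generation to a mix of fresh "real" N(0, 1)
    samples and samples from the previous generation's fit.

    Toy sketch: real_frac is the proportion of exogenous, grounded data
    per generation. Returns the fitted variance after the last generation.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    for _ in range(steps):
        n_real = int(n * real_frac)
        data = [rng.gauss(0.0, 1.0) for _ in range(n_real)]        # grounded signal
        data += [rng.gauss(mu, sigma) for _ in range(n - n_real)]  # model's own output
        mu, sigma = statistics.fmean(data), statistics.pstdev(data)
    return sigma ** 2

# Pure self-training (real_frac=0) degenerates over enough generations;
# a constant grounded fraction keeps the variance near the true value.
print(final_variance(0.0, steps=2000), final_variance(0.3))
```

In this sketch the fitted variance roughly follows v ← real_frac · 1 + (1 − real_frac) · v, which has a stable fixed point at the true variance whenever real_frac stays bounded away from zero; the degenerate dynamics only appear when the grounded fraction vanishes.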


laalsaas@c3d2.social wrote (last edited)
#13

@devsimsek this only means that LLMs can't provide their own training data, right? Could they still "invent" new algorithms that make more of the existing data?


tallsimon@mstdn.ca wrote (last edited)
#14

                              @devsimsek Chatting with U Toronto AI profs 6, 7 years ago, I posed a problem.

                              "Teach your AI everything about whole, integer, rational and real numbers. Ask it to solve a problem that requires it to invent complex numbers."

                              Reply: "Oh... It doesn't work that way."

                              I knew that, but the ability to frame your observations as the product of a higher order system is IMHO key to what we call "intelligence". Collecting evidence that can disprove your hypothesis is science.

                              LLM approaches are neither, in a very expensive way.

I'll have to read the paper, though. I'm looking forward to the AI equivalent of Gödel's theorem that shuts down this annoying iteration of the field.


mccrankyface@beige.party wrote (last edited)
#15

                                @devsimsek

                                "The curse of recursion" or, as I've been calling it for a while now, "a feedback loop of shit."


joblakely@mastodon.social wrote (last edited)
#16

                                  @devsimsek
This is great. I've been saying the same since before it was conceived, but I expected it on the heels of the Cambridge Analytica scandal & the techbros' desire to use it as a Maxwell's Demon. If these AI developers cared about their product, they would be funding & not cutting research, the sciences, the arts, quality free education, ensuring diversity of experience & insight. But they are going out of their way to destroy their own models with falsehoods of every kind.
They & it lack discernment.


twit_terrorist@mastodont.cat wrote (last edited)
#17

                                    @devsimsek What should we trust, then? Researchers, or LinkedIn Unemployed AI Ambassadors?


noplasticshower@infosec.exchange wrote (last edited)
#18

                                      @devsimsek also see https://berryvilleiml.com/2026/01/10/recursive-pollution-and-model-collapse-are-not-the-same/

This is part of a long-running #ML research thread with big #MLsec impact.


anthk@neopaquita.es wrote (last edited)
#19

@devsimsek I said this a few years ago, and I am no mathematician. Simple combinatorics and discrete math over sets will tell you that.


lunadragofelis@void.lgbt wrote (last edited)
#20
@devsimsek I think AGI and self-improvement are possible. But definitely not with the technology (neural LLMs) that is being marketed as "AI" today.

                                          I think that AGI needs to be able to think logically.
Powered by NodeBB Contributors · Graciously hosted by data.coop