Skip to content
  • Hjem
  • Seneste
  • Etiketter
  • Populære
  • Verden
  • Bruger
  • Grupper
Temaer
  • Light
  • Brite
  • Cerulean
  • Cosmo
  • Flatly
  • Journal
  • Litera
  • Lumen
  • Lux
  • Materia
  • Minty
  • Morph
  • Pulse
  • Sandstone
  • Simplex
  • Sketchy
  • Spacelab
  • United
  • Yeti
  • Zephyr
  • Dark
  • Cyborg
  • Darkly
  • Quartz
  • Slate
  • Solar
  • Superhero
  • Vapor

  • Default (No Skin)
  • No Skin
Kollaps
FARVEL BIG TECH
  1. Forside
  2. Ikke-kategoriseret
  3. I am convinced we are on the verge of the first "AI agent worm".

I am convinced we are on the verge of the first "AI agent worm".

Planlagt Fastgjort Låst Flyttet Ikke-kategoriseret
45 Indlæg 27 Posters 0 Visninger
  • Ældste til nyeste
  • Nyeste til ældste
  • Most Votes
Svar
  • Svar som emne
Login for at svare
Denne tråd er blevet slettet. Kun brugere med emne behandlings privilegier kan se den.
  • cwebber@social.coopC cwebber@social.coop

    Here's another way to put it: if those using AI agents to codegen / review are the *initialization vectors*, we now also have a significant computing public health reason to discourage the use of these tools.

    Not that I think it will. But I'm convinced this is how patient zero will happen.

    cwebber@social.coopC This user is from outside of this forum
    cwebber@social.coopC This user is from outside of this forum
    cwebber@social.coop
    wrote sidst redigeret af
    #19

    I know some people are thinking "well pulling off this kind of thing, it would have to be controlled with intent of a human actor"

    It doesn't have to be.

    1. A human could *kick off* such a process, and then it runs away from them.
    2. It wouldn't even require a specific prompt to kick off a worm. There's enough scifi out there for this to be something any one of the barely-monitored openclaw agents could determine it should do.

    Whether it's kicked off by a human explicitly or a stray agent, it doesn't require "intentionality". Biological viruses don't have interiority / intentionality, and yet are major threats that reproduce and adapt.

    vv@solarpunk.moeV arnebab@rollenspiel.socialA 2 Replies Last reply
    0
    • cwebber@social.coopC cwebber@social.coop

      @faoluin well I still prompt @vv

      bean@twoot.siteB This user is from outside of this forum
      bean@twoot.siteB This user is from outside of this forum
      bean@twoot.site
      wrote sidst redigeret af
      #20

      @cwebber @faoluin @vv isn't vae a vvorm?

      vv@solarpunk.moeV 1 Reply Last reply
      0
      • cwebber@social.coopC cwebber@social.coop

        I know some people are thinking "well pulling off this kind of thing, it would have to be controlled with intent of a human actor"

        It doesn't have to be.

        1. A human could *kick off* such a process, and then it runs away from them.
        2. It wouldn't even require a specific prompt to kick off a worm. There's enough scifi out there for this to be something any one of the barely-monitored openclaw agents could determine it should do.

        Whether it's kicked off by a human explicitly or a stray agent, it doesn't require "intentionality". Biological viruses don't have interiority / intentionality, and yet are major threats that reproduce and adapt.

        vv@solarpunk.moeV This user is from outside of this forum
        vv@solarpunk.moeV This user is from outside of this forum
        vv@solarpunk.moe
        wrote sidst redigeret af
        #21

        @cwebber what i think is interesting about this is the potential for it to get so out of control that they have to pull the plug on the entire agent service

        cwebber@social.coopC 1 Reply Last reply
        0
        • vv@solarpunk.moeV vv@solarpunk.moe

          @cwebber what i think is interesting about this is the potential for it to get so out of control that they have to pull the plug on the entire agent service

          cwebber@social.coopC This user is from outside of this forum
          cwebber@social.coopC This user is from outside of this forum
          cwebber@social.coop
          wrote sidst redigeret af
          #22

          @vv Yeah. I mean, local models *might* be able to pull this off but right now Claude is the most likely candidate, it's the most capable. But even then, the most capable open model that is capable of doing such damage on its own is somewhere around a gigabyte, not a small download.

          (But, people download huge things all the time, so not completely infeasible either.)

          dandylyons@iosdev.spaceD noisytoot@berkeley.edu.plN 2 Replies Last reply
          0
          • bean@twoot.siteB bean@twoot.site

            @cwebber @faoluin @vv isn't vae a vvorm?

            vv@solarpunk.moeV This user is from outside of this forum
            vv@solarpunk.moeV This user is from outside of this forum
            vv@solarpunk.moe
            wrote sidst redigeret af
            #23

            @bean @cwebber @faoluin aren't vae 😛

            bean@twoot.siteB 1 Reply Last reply
            0
            • vv@solarpunk.moeV vv@solarpunk.moe

              @bean @cwebber @faoluin aren't vae 😛

              bean@twoot.siteB This user is from outside of this forum
              bean@twoot.siteB This user is from outside of this forum
              bean@twoot.site
              wrote sidst redigeret af
              #24

              @vv @cwebber @faoluin ah, excuse me, your vvnesses

              1 Reply Last reply
              0
              • cwebber@social.coopC cwebber@social.coop

                I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't it quite itself: an attack on a PR agent that got it to set up to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another

                But, the agents installed weren't given instructions to *do* anything yet.

                Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.

                I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.

                johnbolton1122@mstdn.partyJ This user is from outside of this forum
                johnbolton1122@mstdn.partyJ This user is from outside of this forum
                johnbolton1122@mstdn.party
                wrote sidst redigeret af
                #25

                @cwebber Looking for a smarter way to earn online?
                This complete system shows you how to build income step by step — even if you’re a beginner.
                ✔ Easy to follow
                ✔ No technical skills required
                ✔ Limited time special price
                📩 Message us for full details.

                https://site-ylhjjre3i.godaddysites.com/

                For more details :

                https://www.facebook.com/share/1F1L47AFFe/

                1 Reply Last reply
                0
                • cwebber@social.coopC cwebber@social.coop

                  I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't it quite itself: an attack on a PR agent that got it to set up to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another

                  But, the agents installed weren't given instructions to *do* anything yet.

                  Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.

                  I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.

                  krafttea@mastodon.socialK This user is from outside of this forum
                  krafttea@mastodon.socialK This user is from outside of this forum
                  krafttea@mastodon.social
                  wrote sidst redigeret af
                  #26

                  @cwebber I'm convinced it will be an AI agentic worm... because somehow people aren't allowed to use the word "agent" in the US ever since AI and now everything is agentic.

                  Agentic is the new idiotic.

                  1 Reply Last reply
                  0
                  • mcc@mastodon.socialM mcc@mastodon.social

                    @cwebber meanwhile people I talk to are like "wait why do you want guarantees your open source supply chain doesn't have LLM-sourced code in it. it has literally never occurred to me that this would be a thing someone would desire"

                    dandylyons@iosdev.spaceD This user is from outside of this forum
                    dandylyons@iosdev.spaceD This user is from outside of this forum
                    dandylyons@iosdev.space
                    wrote sidst redigeret af
                    #27

                    @mcc @cwebber

                    I think there is a valuable distinction between LLM-sourced code and LLM tool calls. Both are potentially problematic but have different threat vectors.

                    LLM-sourced code is a non-deterministic system writing deterministic code. We can still code review it.

                    LLM tool calls is a non-deterministic system taking non-deterministic actions via deterministic tools. This can’t be code reviewed and must be sandboxed.

                    mcc@mastodon.socialM 1 Reply Last reply
                    0
                    • cwebber@social.coopC cwebber@social.coop

                      @vv Yeah. I mean, local models *might* be able to pull this off but right now Claude is the most likely candidate, it's the most capable. But even then, the most capable open model that is capable of doing such damage on its own is somewhere around a gigabyte, not a small download.

                      (But, people download huge things all the time, so not completely infeasible either.)

                      dandylyons@iosdev.spaceD This user is from outside of this forum
                      dandylyons@iosdev.spaceD This user is from outside of this forum
                      dandylyons@iosdev.space
                      wrote sidst redigeret af
                      #28

                      @cwebber @vv If a local model is calling tools then it is still vulnerable to prompt injection.

                      vv@solarpunk.moeV 1 Reply Last reply
                      0
                      • cwebber@social.coopC cwebber@social.coop

                        I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't it quite itself: an attack on a PR agent that got it to set up to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another

                        But, the agents installed weren't given instructions to *do* anything yet.

                        Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.

                        I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.

                        raymaccarthy@mastodon.ieR This user is from outside of this forum
                        raymaccarthy@mastodon.ieR This user is from outside of this forum
                        raymaccarthy@mastodon.ie
                        wrote sidst redigeret af
                        #29

                        @cwebber
                        The Shockwave Rider, John Brunner, 1975
                        https://en.wikipedia.org/wiki/The_Shockwave_Rider

                        IMO better than Alan Toffler's Future Shock (which is wrong, see 19th C. or early 20th.) because it's entertaining and not pretentious. Inspired by Future Shock.

                        https://en.wikipedia.org/wiki/Future_Shock#Future_shock

                        1 Reply Last reply
                        0
                        • cwebber@social.coopC cwebber@social.coop

                          I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't it quite itself: an attack on a PR agent that got it to set up to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another

                          But, the agents installed weren't given instructions to *do* anything yet.

                          Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.

                          I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.

                          sylvielorxu@chaos.socialS This user is from outside of this forum
                          sylvielorxu@chaos.socialS This user is from outside of this forum
                          sylvielorxu@chaos.social
                          wrote sidst redigeret af
                          #30

                          @cwebber Having OpenClaw installed without my consent is some of the nastiest malware I've seen in a while 😞

                          1 Reply Last reply
                          1
                          0
                          • dandylyons@iosdev.spaceD dandylyons@iosdev.space

                            @mcc @cwebber

                            I think there is a valuable distinction between LLM-sourced code and LLM tool calls. Both are potentially problematic but have different threat vectors.

                            LLM-sourced code is a non-deterministic system writing deterministic code. We can still code review it.

                            LLM tool calls is a non-deterministic system taking non-deterministic actions via deterministic tools. This can’t be code reviewed and must be sandboxed.

                            mcc@mastodon.socialM This user is from outside of this forum
                            mcc@mastodon.socialM This user is from outside of this forum
                            mcc@mastodon.social
                            wrote sidst redigeret af
                            #31

                            @dandylyons @cwebber there are various ways I could respond to this post, but instead:

                            I'd like you to consider *the specific two posts in this thread you are responding to* and ask yourself if your comment is remotely relevant, or if you are simply pattern-matching on anti-LLM sentiment and responding with aggression/a thread derail.

                            dandylyons@iosdev.spaceD 1 Reply Last reply
                            0
                            • dandylyons@iosdev.spaceD dandylyons@iosdev.space

                              @cwebber @vv If a local model is calling tools then it is still vulnerable to prompt injection.

                              vv@solarpunk.moeV This user is from outside of this forum
                              vv@solarpunk.moeV This user is from outside of this forum
                              vv@solarpunk.moe
                              wrote sidst redigeret af
                              #32

                              @dandylyons @cwebber for sure, but it still takes some level of ability to perform these tasks effectively, which local models, especially anything that can run on a typical machine, struggle with

                              dandylyons@iosdev.spaceD 1 Reply Last reply
                              0
                              • vv@solarpunk.moeV vv@solarpunk.moe

                                @dandylyons @cwebber for sure, but it still takes some level of ability to perform these tasks effectively, which local models, especially anything that can run on a typical machine, struggle with

                                dandylyons@iosdev.spaceD This user is from outside of this forum
                                dandylyons@iosdev.spaceD This user is from outside of this forum
                                dandylyons@iosdev.space
                                wrote sidst redigeret af
                                #33

                                @vv @cwebber This is a good point. For now, local models are not proficient at tool calling. I don’t expect that to last for very long though.

                                1 Reply Last reply
                                0
                                • cwebber@social.coopC cwebber@social.coop

                                  I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't it quite itself: an attack on a PR agent that got it to set up to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another

                                  But, the agents installed weren't given instructions to *do* anything yet.

                                  Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.

                                  I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.

                                  reiddragon@fedi.catto.gardenR This user is from outside of this forum
                                  reiddragon@fedi.catto.gardenR This user is from outside of this forum
                                  reiddragon@fedi.catto.garden
                                  wrote sidst redigeret af
                                  #34
                                  @cwebber In today's episode of "We build the Torment Nexus from the hit novel 'Don't build the Torment Nexus'"...
                                  1 Reply Last reply
                                  0
                                  • mcc@mastodon.socialM mcc@mastodon.social

                                    @dandylyons @cwebber there are various ways I could respond to this post, but instead:

                                    I'd like you to consider *the specific two posts in this thread you are responding to* and ask yourself if your comment is remotely relevant, or if you are simply pattern-matching on anti-LLM sentiment and responding with aggression/a thread derail.

                                    dandylyons@iosdev.spaceD This user is from outside of this forum
                                    dandylyons@iosdev.spaceD This user is from outside of this forum
                                    dandylyons@iosdev.space
                                    wrote sidst redigeret af
                                    #35

                                    @mcc @cwebber The original post was all about an LLM taking non-deterministic shell level actions at runtime. And you conflated that with deterministic code written by an LLM.

                                    What I wrote is very relevant.

                                    mcc@mastodon.socialM 1 Reply Last reply
                                    0
                                    • cwebber@social.coopC cwebber@social.coop

                                      I know some people are thinking "well pulling off this kind of thing, it would have to be controlled with intent of a human actor"

                                      It doesn't have to be.

                                      1. A human could *kick off* such a process, and then it runs away from them.
                                      2. It wouldn't even require a specific prompt to kick off a worm. There's enough scifi out there for this to be something any one of the barely-monitored openclaw agents could determine it should do.

                                      Whether it's kicked off by a human explicitly or a stray agent, it doesn't require "intentionality". Biological viruses don't have interiority / intentionality, and yet are major threats that reproduce and adapt.

                                      arnebab@rollenspiel.socialA This user is from outside of this forum
                                      arnebab@rollenspiel.socialA This user is from outside of this forum
                                      arnebab@rollenspiel.social
                                      wrote sidst redigeret af
                                      #36

                                      @cwebber According to #Shadowrun the crash virus is still three years away.

                                      https://shadowrun.fandom.com/wiki/Crash_Virus_of_2029

                                      "Fun" fact: In Shadowrun the Crash Virus learned to kill humans who connected their brains to the net. It was the start of lethal internet input.

                                      1 Reply Last reply
                                      0
                                      • cwebber@social.coopC cwebber@social.coop

                                        I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't it quite itself: an attack on a PR agent that got it to set up to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another

                                        But, the agents installed weren't given instructions to *do* anything yet.

                                        Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.

                                        I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.

                                        aronia@tech.lgbtA This user is from outside of this forum
                                        aronia@tech.lgbtA This user is from outside of this forum
                                        aronia@tech.lgbt
                                        wrote sidst redigeret af
                                        #37

                                        @cwebber

                                        The postinstall script installs a legitimate, non-malicious package (OpenClaw). There is no malware to detect.

                                        i beg to differ

                                        bonzoesc@m.bonzoesc.netB 1 Reply Last reply
                                        0
                                        • dandylyons@iosdev.spaceD dandylyons@iosdev.space

                                          @mcc @cwebber The original post was all about an LLM taking non-deterministic shell level actions at runtime. And you conflated that with deterministic code written by an LLM.

                                          What I wrote is very relevant.

                                          mcc@mastodon.socialM This user is from outside of this forum
                                          mcc@mastodon.socialM This user is from outside of this forum
                                          mcc@mastodon.social
                                          wrote sidst redigeret af
                                          #38

                                          @dandylyons @cwebber it is about an attack based on covertly deploying LLM development tools, with the possible intent of later using them to leverage a second stage attack. If the LLM development tools were already installed, installing openclaw would not have been necessary and the attack could have worked a different way. We are discussing a situation where *the developer of a piece of software I use merely having LLM tools on their computer represents a risk to me*

                                          cwebber@social.coopC mcc@mastodon.socialM 2 Replies Last reply
                                          0
                                          Svar
                                          • Svar som emne
                                          Login for at svare
                                          • Ældste til nyeste
                                          • Nyeste til ældste
                                          • Most Votes


                                          • Log ind

                                          • Har du ikke en konto? Tilmeld

                                          • Login or register to search.
                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop
                                          • First post
                                            Last post
                                          0
                                          • Hjem
                                          • Seneste
                                          • Etiketter
                                          • Populære
                                          • Verden
                                          • Bruger
                                          • Grupper