I am convinced we are on the verge of the first "AI agent worm".
-
I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't it quite itself: an attack on a PR agent that got it to set up to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another
But, the agents installed weren't given instructions to *do* anything yet.
Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.
I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.
@cwebber "Ha ha!"
-
J jwcph@helvede.net shared this topic
-
I know some people are thinking "well pulling off this kind of thing, it would have to be controlled with intent of a human actor"
It doesn't have to be.
1. A human could *kick off* such a process, and then it runs away from them.
2. It wouldn't even require a specific prompt to kick off a worm. There's enough scifi out there for this to be something any one of the barely-monitored openclaw agents could determine it should do.Whether it's kicked off by a human explicitly or a stray agent, it doesn't require "intentionality". Biological viruses don't have interiority / intentionality, and yet are major threats that reproduce and adapt.
@cwebber so I'm following this right, it sounds like the project or its maintainers don't even necessarily need to even be using LLM tools, the attack pattern simply targets contributors who are using LLM development tools? and so all that is really needed is for the payload to be subtle and the maintainer to be sufficiently overwhelmed (say, by an endless fire hose of LLM-generated liquid shit slop pull requests)?
-
@cwebber so I'm following this right, it sounds like the project or its maintainers don't even necessarily need to even be using LLM tools, the attack pattern simply targets contributors who are using LLM development tools? and so all that is really needed is for the payload to be subtle and the maintainer to be sufficiently overwhelmed (say, by an endless fire hose of LLM-generated liquid shit slop pull requests)?
@aeva Yes and it's worse than that: the maintainer doesn't even need to be running these tools on their computer. The attack I linked had Claude's independently-running REVIEW BOT on GitHub commit it via injection attack
-
I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't it quite itself: an attack on a PR agent that got it to set up to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another
But, the agents installed weren't given instructions to *do* anything yet.
Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.
I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.
@cwebber This is making me more worried about Vorta's Claude workflows.

Backup software that handles highly sensitive data would be a prime target for such a supply chain attack. -
@aeva Yes and it's worse than that: the maintainer doesn't even need to be running these tools on their computer. The attack I linked had Claude's independently-running REVIEW BOT on GitHub commit it via injection attack
@aeva But once that was done, the agent was set up to install on users' devices
So the initial attack vector can literally be "Any AI agent in your stack whatsoever getting tricked" as a pathway for infecting computers everywhere
-
@cwebber This is making me more worried about Vorta's Claude workflows.

Backup software that handles highly sensitive data would be a prime target for such a supply chain attack.@csepp Don't forget about KeePassXC. I dunno if they kept going after this "initial test" or not https://www.reddit.com/r/KeePass/comments/1lnvw6q/keepassxc_codebases_jump_into_generative_ai/
-
@csepp Don't forget about KeePassXC. I dunno if they kept going after this "initial test" or not https://www.reddit.com/r/KeePass/comments/1lnvw6q/keepassxc_codebases_jump_into_generative_ai/
@csepp And don't forget about LITERALLY MOZILLA FIREFOX
-
@aeva But once that was done, the agent was set up to install on users' devices
So the initial attack vector can literally be "Any AI agent in your stack whatsoever getting tricked" as a pathway for infecting computers everywhere
@cwebber apropos of nothing, is pottery still a big deal for humans? i was thinking this morning that pottery might be a nice career change for me.
-
@mcc exactly put
@cwebber @mcc @dandylyons
not forgetting the second post - the one that appropriately begins by "meanwhile" - wasn't conflating anything, it was contrasting the gravity of the situation with the surreallistically ingenuous state of mind of some people. -
@csepp And don't forget about LITERALLY MOZILLA FIREFOX
@cwebber Oh shit, I rely on all three of these.
Welppppp. I guess I'll have to start looking into alternative password managers. -
I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't it quite itself: an attack on a PR agent that got it to set up to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another
But, the agents installed weren't given instructions to *do* anything yet.
Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.
I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.
Ah, the infinite papirclips scenario.
-
@cwebber Oh shit, I rely on all three of these.
Welppppp. I guess I'll have to start looking into alternative password managers. -
-
@mttaggart @mcc @cwebber Do we know what is being used for inference? At this point in time it's unlikely that they can use a self-hosted model, so there will be network calls.
-
-
@Canageek @csepp There was a recent thing, I can't find it now, where Mozilla added a commit to their agents thing to say "don't explicitly say when AI agents helped author a commit anymore", probably because they were getting community pushback
as you may have guessed, it got some community pushback
-
@cwebber apropos of nothing, is pottery still a big deal for humans? i was thinking this morning that pottery might be a nice career change for me.
-
@Canageek @csepp There was a recent thing, I can't find it now, where Mozilla added a commit to their agents thing to say "don't explicitly say when AI agents helped author a commit anymore", probably because they were getting community pushback
as you may have guessed, it got some community pushback
-
@mttaggart @mcc @cwebber Do we know what is being used for inference? At this point in time it's unlikely that they can use a self-hosted model, so there will be network calls.
@dvshkn @mcc @cwebber So the trick here is if you install OpenClaw in secret on a user's machine who isn't checking carefully, you might hide easily in network traffic. Use of tools like Claude Code would make the same API calls, which is likely for users who would be targeted with these attacks.
The real insane part is if multiple instance of OpenClaw were running on the same machine, so not even the process name looked suspicious. But of course process names are a poor indicator and can be changed.
-