I am convinced we are on the verge of the first "AI agent worm".
-
Here's another way to put it: if those using AI agents to codegen / review are the *initialization vectors*, we now also have a significant computing public health reason to discourage the use of these tools.
Not that I think it will. But I'm convinced this is how patient zero will happen.
@cwebber just today our org had a big "how to set up coding with agents" preso, and in the chat someone's like "here's how to connect your agents with the Windows credential store or the macOS keychain" and I all but wept
-
I can't help recalling a small vignette, I think from Snow Crash, that describes a world where nanobots are constantly waging war. In other words, that world was suffused with miniature robots constantly vying to take over systems, and it was just kind of like normal viruses and bugs versus the organisms they were trying to take over
@GhostOnTheHalfShell @cwebber Diamond Age, I think? (Part of the early worldbuilding, with house shields and such)
-
I know some people are thinking "well, pulling off this kind of thing would have to be controlled with the intent of a human actor."
It doesn't have to be.
1. A human could *kick off* such a process, and then it runs away from them.
2. It wouldn't even require a specific prompt to kick off a worm. There's enough scifi out there for this to be something any one of the barely-monitored openclaw agents could determine it should do.
Whether it's kicked off by a human explicitly or by a stray agent, it doesn't require "intentionality". Biological viruses don't have interiority / intentionality, and yet they are major threats that reproduce and adapt.
-
@cwebber what i think is interesting about this is the potential for it to get so out of control that they have to pull the plug on the entire agent service
-
@vv Yeah. I mean, local models *might* be able to pull this off, but right now Claude is the most likely candidate; it's the most capable. Even then, the smallest open model capable of doing such damage on its own is somewhere around a gigabyte, not a small download.
(But, people download huge things all the time, so not completely infeasible either.)
-
I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't quite it itself: an attack on a PR agent that got it to install openclaw with full access on 4k machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another
But, the agents installed weren't given instructions to *do* anything yet.
Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for the typically byte-for-byte identical worm embedded in the system, an agent worm can do different, nondeterministic things on every install, and carry out a global action.
I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.
-
@cwebber I'm convinced it will be an AI agentic worm... because somehow people aren't allowed to use the word "agent" in the US ever since AI and now everything is agentic.
Agentic is the new idiotic.
-
@cwebber meanwhile people I talk to are like "wait why do you want guarantees your open source supply chain doesn't have LLM-sourced code in it. it has literally never occurred to me that this would be a thing someone would desire"
I think there is a valuable distinction between LLM-sourced code and LLM tool calls. Both are potentially problematic but have different threat vectors.
LLM-sourced code is a non-deterministic system writing deterministic code. We can still code review it.
LLM tool calls are a non-deterministic system taking non-deterministic actions via deterministic tools. They can't be code reviewed and must be sandboxed.
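A minimal sketch of what "must be sandboxed" can mean in practice: instead of trusting the model's output, validate every requested tool call against a deterministic allowlist before anything executes. All names here (`ALLOWED_TOOLS`, `dispatch`) are hypothetical, not from any real agent framework.

```python
# Hypothetical sketch: gating non-deterministic LLM tool calls
# behind a deterministic allowlist before execution.

ALLOWED_TOOLS = {
    "read_file": {"path"},   # allowed tool -> permitted argument names
    "run_tests": set(),
}

def dispatch(tool_call: dict) -> str:
    """Execute a model-requested tool call only if it passes the allowlist."""
    name = tool_call.get("name")
    args = tool_call.get("args", {})
    if name not in ALLOWED_TOOLS:
        return f"refused: unknown tool {name!r}"
    extra = set(args) - ALLOWED_TOOLS[name]
    if extra:
        return f"refused: unexpected arguments {sorted(extra)}"
    # A real sandbox would additionally drop privileges, block network
    # access, and run the tool in an isolated process or container.
    return f"ok: would run {name} with {args}"

print(dispatch({"name": "run_tests", "args": {}}))
print(dispatch({"name": "install_agent", "args": {"url": "..."}}))
```

The point of the design: the model can *request* anything, but the set of actions that can actually happen is fixed ahead of time by code a human can review.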
-
@cwebber
The Shockwave Rider, John Brunner, 1975
https://en.wikipedia.org/wiki/The_Shockwave_Rider
IMO better than Alvin Toffler's Future Shock (which is wrong; see the 19th C. or early 20th) because it's entertaining and not pretentious. Inspired by Future Shock.
-
@cwebber Having OpenClaw installed without my consent is some of the nastiest malware I've seen in a while

-
@dandylyons @cwebber there are various ways I could respond to this post, but instead:
I'd like you to consider *the specific two posts in this thread you are responding to* and ask yourself if your comment is remotely relevant, or if you are simply pattern-matching on anti-LLM sentiment and responding with aggression/a thread derail.
-
@dandylyons @cwebber for sure, but it still takes some level of ability to perform these tasks effectively, which local models, especially anything that can run on a typical machine, struggle with
-
@cwebber In today's episode of "We build the Torment Nexus from the hit novel 'Don't build the Torment Nexus'"...
-