People shouldn't be *abused*, ever. If people are *shilling* AI and trying to force it on others, they might deserve some amount of shame and disapprobation. But nobody deserves abuse.
@josh@social.joshtriplett.org
Posts
-
👀 … https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ …my colleague Denver Gingerich writes: newcomers' extensive reliance on LLM-backed generative AI is comparable to the Eternal September onslaught to USENET in 1993.
-
> This implies no humans are doing code review. If it's crap code then it goes nowhere and collapse is avoided.
No, it implies no humans *without the aid of LLMs* are reviewing *how easy it would be to maintain without LLMs*. And that's an easy state to get into.
The "in between" outcome seems much more likely to me than it does to you: projects can limp along for a long time, and be popular enough to discourage competition or hold onto users for a while.
Diseases that are contagious before people are symptomatic are especially hazardous. LLM-written technical debt takes time to become symptomatic. The epidemic is time-delayed from the initial outbreak, and exponentials are hard to see from the middle.
-
And the problem isn't just *new projects* that are LLM-written, it's the LLM-cordyceps taking over the bodies of existing projects and driving out developers who want to work with humans and don't have the complexity-and-debt-and-NIH tolerance of LLMs. (And solving that isn't as simple as forking, because it's possible one or both groups don't have the critical mass that they would have had together.)
-
I'm suggesting, as the article we're replying to points out, that it's now easier for people to go "eh, I don't need FOSS collaborators, I have LLMs and look how many lines of code I produce per day!". And conversely, projects developed heavily by LLMs will not be welcoming environments to people who don't want to work with LLMs. This creates silos.
-
> You can prevent it by asking LLM to add comments and check those comments
You really can't; it is not anywhere close to that simple. The problem isn't just line-level, it's (among many other things) systemic design complexity, tolerance for technical debt, unbounded (except by token budget) capacity to duplicate or reinvent rather than reuse, none of the programmer's virtue of "laziness", and a substantial multiplier on the hubris.
-
Also, the "drowning in slop" problems have real-world social consequences! Some projects are having to go closer to "we don't take patches from people we don't know", and that's damaging the ability to do drive-by or one-off contributions, or to onboard new contributors. That feels like the prologue of ecosystem collapse.
-
I think software is not at all immune, in the sense that just as LLMs can produce grammatically correct sentences that make no sense and have no factual basis, they can produce code that *compiles* but is utterly alien to what any sensible human with taste would write.
-
Leaving aside for a moment the issue that (B) can leave maintainers drowning in slop...
There is a massive game-theoretic problem here. Employers are forcing some developers to deal with LLMs. Some people, of their own volition, are excited about LLMs. Some people want nothing to do with LLMs. People who heavily use and rely on LLMs have different standards for acceptable complexity and maintainability. LLMs encourage people to work in silos, using LLMs instead of collaborators, and that serves LLM purveyors. It's much easier to collaborate with "You're absolutely right!". Codebases and ecosystems and communities diverge.
-
For what it's worth, if your blog post had come across saying what you are *currently* saying on Fedi, I would be much more enthusiastic and appreciative of it, and I suspect others would be too.
-
If you're going to post a different version, please post a diff somewhere, to help make sure people are talking about the same thing.
-
I get that, and *that* is a much more inviting message. And I appreciate the *sentiment* in things like "new wave of people who are excited about our craft, and how they can improve personal autonomy for themselves and others". But that's not how much of the post came across to many.
If the post had said, for instance:
"LLMs have made basic software development capabilities available to people who could never write software before, and those people may not yet be aware of the norms of the broader Open Source software community or the issues of maintainability and technical debt. Even though we're dealing with a lot of slop, we should avoid driving potential new developers off with abuse before they have a chance to learn. We were all newbies once, and collaboration and maintenance are skills that take time to learn."
Something like that would have been very different. But what you *said* was, for instance, "adapt FOSS projects to improve pro-AI contributor onboarding", rather than "figure out how to reach out to people who are currently using AI and see if they want to join broader communities that may not welcome those tools". You said "seriously consider cautiously and carefully incorporating their workflows with ours", which is advocacy for those *workflows*, not just for being understanding towards the *potential new developers*.
-
There are more projects out there than the Linux kernel. Smaller projects with fewer maintainers can more quickly get overwhelmed. And when you have a smaller project, or an area of a project, with only a few maintainers, it only takes one or two LLM users and a pile of tokens to turn that area into *primarily* LLM-written material or introduce way too much complexity.
And to be clear, I'm not arguing against the careful use of (for instance) LLM security analyses, by people who want to run those *and filter the results*. But nobody should be forced to deal with LLM output who didn't sign up for it, and that includes LLM-written patches and LLM-written mails.
-
"Words that aren't there" like this?
> Historically, software freedom has typically necessitated interacting with others
Suggesting that this is merely "historically"?
> more easily with LLM-backed generative AI coding tools (and the ease with which changes can be made generally) there is less of a natural tendency for people to work with existing FOSS communities. And we should be ok with that!
We should be okay with that? We should not treat it as an *existential threat* and respond accordingly? Those are the words that aren't there?
-
One of *many* arguments against: codebases substantially contributed to by LLMs will develop a tolerance for complexity that is not conducive to being maintained by anything *other* than an LLM.
-
My first paid software development was in VBA. I did some of my first FOSS work and experimentation on a proprietary system (Windows). I benefited heavily from MinGW/MSYS. I appreciated having bridges available into the Open Source world; I would have had a harder time if they weren't.
But I also appreciated that, when I was doing so, I had access to plenty of guidance, and knew that I was at the starting point of a road, and not done yet.
-
Talking with them is good. Helping to educate them is good. Making it sound as if what they are doing is okay is *not*.
There is a big difference between offering an olive branch to people who *might* be productive contributors in the *future*, and telling them that what they're doing *now* is okay.
The best AI policy remains "do not contribute any LLM-written content, ever". You have published a post that makes it easier for people who oppose such policies to cite your "olive branch" when arguing against them, and it is not obvious from your post that you do not want that to happen.
I don't want to see people *abused* for using LLMs. I do want them to understand that what they're doing is not okay and not welcome and not a positive contribution.