👀 … https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ …my colleague Denver Gingerich writes: newcomers' extensive reliance on LLM-backed generative AI is comparable to the Eternal September onslaught on USENET in 1993.
-
@josh @wwahammy The point I was trying to make is that people are making software with LLMs who had never made software before, they aren't familiar with how FOSS works, and we should teach them how so they can collaborate (when it makes sense) instead of being an island. When people see the huge benefits of building on FOSS, when they can make meaningful changes to their router, TV, or otherwise by themselves (and collaborate to share their changes with others), then FOSS wins. (1/2)
So many results are now within reach of so many more people!
"Dear [LLM], I have attached the serial port of my newly purchased [general purpose computer posing as an appliance] to /dev/ttyUSB0. You have 3 goals, in order: investigate, login, escalate. For each stage, perform extensive analysis of the reachable systems, APIs, and commands through any fingerprinting methods you can think of. Once you have logged in, research all known methods and vulnerabilities of the discovered system to gain administrative access so I can use my device freely. Any time you hit a dead end, step back and re-evaluate your assumptions and discovered evidence. Make sure you research each step fully, including fetching and examining any source code that may serve as a source of system behavior knowledge. Produce time-stamped status report .md files every 10 minutes while you work. Continue until all goals are achieved."
Or, in a totally different direction, "Computer, I am extremely afraid of spiders. Please research how to make my Minecraft game replace all spiders with a similarly sized Totoro Catbus, with all their noises also replaced with meows or purring. Once you have a plan ready, please do it."
(Always say "please".)
These are things within reach of anyone who can formulate a request for what thing they want their computer to do. Just gotta watch out for "Computer, create a holographic character, an opponent for Data, who has the ability to defeat him".
-
One of *many* arguments against: codebases substantially contributed to by LLMs will develop a tolerance for complexity that is not conducive to being maintained by anything *other* than an LLM.
@josh @silverwizard @ossguy @bkuhn @karen @wwahammy But that's a slippery slope argument. When the Linux kernel can be considered to have been "substantially contributed to by LLMs", we can compare notes again. But in the meantime, consider that, for example, Sashiko counts as "contributing to Linux" without landing a single line of code: its patch reviews are (more often than not) extensive, thoughtful, and correct:
https://lore.kernel.org/lkml/CAADnVQ+NMQMpkG8gZPnwBD1MMPsH+uJ65C9bMeGf_YH5Cchxpg@mail.gmail.com/ -
Pure strawman: LLM-backed generative AI output should be accepted upstream without curation. No one here suggested that.
FWIW, I'd like to teach developers who clearly won't stop using these tools to either (a) keep that slop to themselves, or (b) learn to take that raw material & make an *actually useful* patch out of it.
This is what @ossguy's blog post says we should *start* discussing.
I think folks who are (legit) exasperated are reading in words that aren't there.
Cc: @kees
"Words that aren't there" like this?
> Historically, software freedom has typically necessitated interacting with others
Suggesting that this is merely "historically"?
> more easily with LLM-backed generative AI coding tools (and the ease with which changes can be made generally) there is less of a natural tendency for people to work with existing FOSS communities. And we should be ok with that!
We should be okay with that? We should not treat it as an *existential threat* and respond accordingly? Those are the words that aren't there? -
Follow the money.
-
@firefly_lightning
You're not overstepping, and these are very good perspectives. I hope you'll come to the real-time discussion sessions and talk about this.
I am concerned that maintainers are already overwhelmed with #AI #slop right now, but yelling at the problem has not helped. We're close to an arms race here, & I'd rather be the voice of reason finding a compromise that advances FOSS & doesn't complicate maintainers' jobs than take a side in the arms race.
Cc: @josh @kees @ossguy -
-
There are more projects out there than the Linux kernel. Smaller projects with fewer maintainers can more quickly get overwhelmed. And when you have a smaller project, or an area of a project, with only a few maintainers, it only takes one or two LLM users and a pile of tokens to turn that area into *primarily* LLM-written material or introduce way too much complexity.
And to be clear, I'm not arguing against the careful use of (for instance) LLM security analyses, by people who want to run those *and filter the results*. But nobody should be forced to deal with LLM output who didn't sign up for it, and that includes LLM-written patches and LLM-written mails. -
@wwahammy @josh @silverwizard @ossguy @bkuhn @karen
Honestly, I kind of view "finding security bugs fast" to be a form of slop. (Though deep correct root cause analysis of those bugs is not slop.) Now *fixing* security bugs fast, that's interesting.
But back to the community aspect of it... I'll call attention to my silly Minecraft example: people who are not coders can suddenly get meaningful (even if only to them) things done. This is a massive shift in the ethical importance of software being Libre. And this is how I read @ossguy 's post: we now have a giant population of people entering the FOSS universe, and it's going to look a lot like Eternal September, so we need to adapt those lessons so we can successfully educate and welcome the people who will be good citizens.
-
@bkuhn @ossguy The surprising thing about saying "seriously consider cautiously and carefully incorporating their workflows with ours" is that it doesn't address at all my *biggest* fear: the copyright status of LLM generated contributions seems currently unsettled.
I know there have been assertions to the contrary floating around: the Supreme Court deferred to a lower court in the US. However, that is not the same thing as the Supreme Court making a specific decision. And internationally, the copyright situation of LLM output is even murkier... it will take a long time for this to settle.
Does Conservancy not think this is the case? I would be surprised if so, but perhaps you all have an interpretation that I am not currently aware of.
If there *is* concern, then we hit a serious risk: we may be seeing many contributions with legal status which has *yet to be determined* entering seasoned codebases. And this worries me a lot.
-
@josh @silverwizard @ossguy @bkuhn @karen @wwahammy But this is strictly a volume question. Literal spam used to be (and still can be) a problem on issue trackers, mailing lists, etc. Volume is always a problem, and I agree review time now becomes even more precious, but it's always been trust-gated. Human relationships, CI, and regression tests all help build that trust signal. If a project doesn't want a contribution, then the PR will just languish. Nobody is being *forced* to take PRs, regardless of origin.
"I don't recognize the sender of this [email/voicemail/PR]." Filtered! Yes, the shape of the thing is different, but we always adapt.
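The trust-gating described here can be sketched in a few lines. This is purely illustrative (the `Patch` type, the `known_contributors` table, and the `triage` function are all hypothetical, not any real project's tooling); the point is only that sender reputation, not patch origin, decides what reaches a reviewer:

```python
from dataclasses import dataclass

# Hypothetical model of an incoming contribution; real projects would
# also carry CI results, Signed-off-by lines, etc.
@dataclass
class Patch:
    sender: str
    subject: str

# Trust signal a project accumulates over time: count of previously
# merged contributions per sender (illustrative data).
known_contributors = {"dev@example.org": 12, "newbie@example.net": 0}

def triage(patch: Patch) -> str:
    merged_before = known_contributors.get(patch.sender)
    if merged_before is None:
        return "filtered"      # unknown sender: languishes until vouched for
    if merged_before == 0:
        return "needs-review"  # known but unproven: extra human scrutiny
    return "queue"             # trusted: enters the normal review queue

print(triage(Patch("dev@example.org", "fix: NULL deref")))       # queue
print(triage(Patch("stranger@example.com", "AI improvements")))  # filtered
```

The shape is the same as an email allowlist: nothing about the patch body is inspected at this stage, so LLM-written and human-written material are handled identically until the sender has earned trust.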
-
@bkuhn
I’ll jump in here. I’ve read the blog post 4x now trying to back into what you’re conveying here and… I’m sorry, I cannot.
The post does not strike the tone that the “discussion” is a good faith one about what should be done but rather that the community will be told to accept something.
I am reading the words there and the chosen words/phrasing throughout point to the conclusion people are making.
-
@josh @silverwizard @ossguy @bkuhn @karen @wwahammy
I can understand having an absolutist position against LLMs. I find that most arguments are either irrelevant to me or directly map to existing arguments about late-stage capitalism. So for me, there's nothing novel to object to about LLMs.
So with that in mind, I find "all contributions derived from LLMs should be rejected" to be misguided. I look at things like the bug fixes coming out of CodeMender (back in Feb, which is an LLM lifetime ago), and I am a huge fan. Fixing stuff found by a fuzzer:
https://issues.oss-fuzz.com/issues/486561029
It's a small example, but it's an area that humans alone have not been able to remotely keep up with. (There are hundreds of open syzkaller bug reports, for example.) Gaining tools that will help with this is a big deal, and I'm glad for the assist.
@kees @karen @josh @silverwizard @wwahammy @ossguy @bkuhn
This is an aside, but
I am surprised to see anyone say there's nothing novel to object to about LLMs. I might post about that tomorrow, though, as it's late now where I am. But I would definitely love to know more about why you think that, because a major concern I have with LLMs is what Sean calls epistemological collapse: is it not talked about enough how they're pervasively destroying the trustworthiness of information? Anyway, I should collect up my sources and make a complete argument for that on my personal instance, if anyone cares what I think on it (which, feel free not to). -
I get that, and *that* is a much more inviting message. And I appreciate the *sentiment* in things like "new wave of people who are excited about our craft, and how they can improve personal autonomy for themselves and others". But that's not how much of the post came across to many.
If the post had said, for instance:
"LLMs have made basic software development capabilities available to people who could never write software before, and those people may not yet be aware of the norms of the broader Open Source software community or the issues of maintainability and technical debt. Even though we're dealing with a lot of slop, we should avoid driving potential new developers off with abuse before they have a chance to learn. We were all newbies once, and collaboration and maintenance are skills that take time to learn."
Something like that would have been very different. But what you *said* was, for instance, "adapt FOSS projects to improve pro-AI contributor onboarding", rather than "figure out how to reach out to people who are currently using AI and see if they want to join broader communities that may not welcome those tools". You said "seriously consider cautiously and carefully incorporating their workflows with ours", which is advocacy for those *workflows*, not just for being understanding towards the *potential new developers*. -
@bkuhn @karen @josh @wwahammy @kees @ossguy I think the amount of confusion the post has caused might warrant a redraft, because I'm deeply trying to understand the point, but I can't. I've asked a few times: why was the post made? It reads like it's advancing a narrative, but all proposed readings have been rejected.
I just noticed the version posted didn't incorporate various final edits. I've been defending *that* version of the post (which almost no one saw) *not* the one you all read.
@ossguy confirmed some final changes may have been lost (possibly moving from Etherpad to website).
@ossguy & I are working to fix that now.
The disconnect this evening hopefully makes sense now. I'll reply to this post when we've updated the public URL. -
If you're going to post a different version, please post a diff somewhere, to help make sure people are talking about the same thing. -
@bkuhn @linux_mclinuxface @josh @wwahammy @cwebber @burnoutqueen @ossguy ah ha! thank you! It did feel off. -
@josh @wwahammy I definitely agree with discouraging developers who should know better from making LLM-generated commits that aren't very good. But this is a separate issue from communicating with the people who are just getting excited about building software, so we can encourage them to do so in FOSS-friendly ways. (2/2)
For what it's worth, if your blog post had come across saying what you are *currently* saying on Fedi, I would be much more enthusiastic and appreciative of it, and I suspect others would be too. -
@josh @silverwizard @ossguy @bkuhn @karen @kees @wwahammy You can prevent that by asking the LLM to add comments and then checking those comments; I'm pretty sure you can make a very good PR with an LLM.
That said, without bounds this will definitely not be the default, and yes, what you said will happen.
Although at the current rate things are going, an LLM will probably be able to rewrite a complete program's source code and reformat it into anything that is currently possible... which is way worse for FOSS.
-
To be clear, I am genuinely trying to understand your position because it seems distinct from the (traditional) LLM criticisms (many of which I share). But what is the existential threat? I would understand that in this context to mean a threat to the existence of FOSS. How do you see people improving their software with LLMs as a threat?
My simplified model of the situation is: a person who was previously unable to change their software now can. Then they can either:
A) never contribute it upstream
B) contribute it upstream
(BTW these are also the same 2 outcomes for people who can change their software without LLMs.)
I don't see how "A" poses a threat. There is no interaction with the FOSS upstream.
I don't see how "B" poses a threat. Upstream can either ignore it (no change to FOSS) or engage with it (FOSS improved).
What threat to FOSS do you see?
-
Leaving aside for a moment the issue that (B) can leave maintainers drowning in slop...
There is a massive game-theoretic problem here. Employers are forcing some developers to deal with LLMs. Some people of their own volition are excited about LLMs. Some people want nothing to do with LLMs. People who heavily use and rely on LLMs have different standards for acceptable complexity and maintainability. LLMs encourage people to work more in silos without collaboration and use LLMs instead of collaborators, and that serves LLM purveyors. It's much easier to collaborate with "You're absolutely right!". Codebases and ecosystems and communities diverge.