👀 … https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ …my colleague Denver Gingerich writes: newcomers' extensive reliance on LLM-backed generative AI is comparable to the Eternal September onslaught on USENET in 1993.
-
-
@evan wrote:
> “I consider myself an expert on this process since I learned about it 45 minutes ago”
This is the second time you've made me laugh in this thread. Thanks for being comic relief (and I know that's not *all* you're doing, but that part is particularly helpful). Thank you!
@bkuhn @richardfontana @cwebber @ossguy @karen thanks! I hope I wasn't too flip.
-
@bkuhn @ossguy @richardfontana So let me summarize:
- Without knowing the legal status of accepting LLM contributions, we're potentially polluting our codebases with stuff that we are going to have a HELL of a time cleaning up later
- The idea of a copyleft-only LLM is a joke and we should not rely on it
- We really only have two realistic scenarios: either FOSS projects cannot legally accept LLM-based contributions, from an international perspective, or everything output by these machines is effectively in the public domain. But at least in the latter scenario we get to weaken copyright for everyone.
That's leaving out a lot of other considerations about LLMs and the ethics of using them, which I think most of the other replies were focused on; I largely focused on the copyright implications in this subthread. Because yes, I agree, it can be important to focus a conversation.
But we can't ignore this right now.
We're putting FOSS codebases at risk.
@cwebber @bkuhn @ossguy @richardfontana FWIW, I'd be delighted to read this as a blog post.
I'm still baffled that chardet just sidestepped this via 0BSD, sort of.
A thought that recently struck me: if code is essentially impossible to license now, will we see a resurgence in other forms of IP, like ... software patents?
Those *would* be defensible post-laundering ...
-
@bkuhn @richardfontana @cwebber @ossguy @karen sadly no!
I really don't like having anyone, including AI systems, write for me under my own name. Not least because I don't like the style and tone of ChatGPT and friends. They just write very blandly.
-
*Thaler is limited to the DC Circuit & very narrow. It's a registration question, & even *its* dicta hints that there is no way we can know the answer on (1).
I think (2) is a strong argument.
As for (3), there is huge value to be extracted by applying copyleft-ish principles (and copyleft licenses themselves) to LLM-backed genAI output.
In the worst case: a big, complex mix of public domain + copylefted human-authored stuff that can't easily be separated.
@bkuhn @richardfontana @evan @cwebber @ossguy Wow, I really appreciate you weighing in here! I was thinking of Naruto v. Slater for point one, not just Thaler, but I certainly defer to your expertise, especially on point 3.
-
@richardfontana wrote:
> “oh I mean of course you could use LLMs to help with the analysis”
I'm catching up backwards on this thread, but do you see now the monster you created by telling @evan that?
@bkuhn @richardfontana @cwebber @ossguy @karen hahahaha sorry!
It wasn't till I had gone through the exercise that I realized I was doing work in a similar vein that you'd already committed to do. I hope it wasn't too monstrous.
-
@evan @bkuhn @ossguy @richardfontana Say for a moment that we *did* make a model which intentionally pulled in leaked source code from various proprietary codebases.
What would your opinion be on the legal-hazard state of accepting that code output? Would you consider it relatively safe from a copyright perspective?
Wow, 2ⁿᵈ time in 2 days that I can work in quotes from ST:TNG, “Unification” (S05E07-8)!
To quote the Ferengi, Omag¹:
> Omag: “Hypothetically speaking?”
> Riker: “Yes.”
> Omag: “I never learned to speak hypothetical.”
IOW, E_TOO_MANY_NON_HYPOTHETICAL_PROBLEMS_WITH_AI
¹ I had to look up Omag's name — my ST:TNG knowledge is not *that* encyclopedic. But see image: Google's G-E-H-munyae can't tell Klingons from Ferengi.
-
@evan @richardfontana I am saying we don't know the answer to that question, and it seems that @bkuhn and @ossguy agree that we don't know the answer, based on previous posts. That lack of knowledge about the copyright implications of LLM-based contributions means we are creating a Schrödinger's-licensing time bomb for our FOSS codebases.
I don't see a plausible path where the time bomb exists: (a) likely none of these proprietary LLM-backed genAI systems are *trained* on proprietary software, & (b) even if they *are*, the proprietary industry as a whole seems very much to *want* to maintain the absolute fiction that these systems' output is magically always public domain, &, if not, that a fair use defense always works.
We meanwhile use copyleft-ish strategies to beat them at their own game.
-
A case filed in 2022 that still hasn't reached trial in 2026 doesn't indicate unreasonable or manipulative delay by Defendants. Such cases really do take that long.
Also, Doe v. GitHub (Microsoft) is a terribly constructed case and actually pushes us toward compulsory licensing of #FOSS works for #LLM-backed gen-#AI training, since the Plaintiffs' lawyers in that case are clearly chasing their own avarice, not software freedom.
Background:
https://sfconservancy.org/news/2022/nov/04/class-action-lawsuit-filing-copilot/
@cwebber @ossguy @richardfontana @bkuhn
I had browsed the docket, but you are right that it is not for me to say whether the motions amount to delay, and the plaintiffs also do not seem to be in a rush (e.g., the joint motion to postpone deadlines). The point is that we don't know how such litigation will play out, especially given the volatility of public sentiment about this industry.
Has anyone written an analysis of how their case pushes toward compulsory licensing?
If LLM outputs routinely constitute derivative works, then it is impossible to comply with licenses (even permissive ones) without acknowledging all of the training data, or undertaking constant, open-ended research as due diligence that each response does not infringe some unknown corpus. The companies don't want to disclose their corpora because their business relies on not acknowledging the derivative relationship.
-
@cwebber @bkuhn @ossguy @richardfontana
Based on my following of current legal cases, I think it's entirely possible that in a year or two we'll suddenly be rolling large OSS codebases back to 2023. And won't that be fun!
Can you please cite the actual precedent?
If it's ongoing, yet-undecided cases you mean, which of the 100s of cases do you mean, what rulings have occurred that lead you to this speculation, and why?
I know you didn't mean to, but your post just feeds the FUD monsters.
-
@cwebber @bkuhn @ossguy @richardfontana Worse IMHO is that we're putting FOSS as a movement at risk if we deskill everyone to the point where you either pay money to have code generated for you, or there is no code.
@jens wrote:
> “we're putting FOSS as a movement at risk if we deskill everyone to the point where you either pay money to have code generated for you, or there is no code.”
I agree completely. We *need* to encourage extreme discipline if LLM-backed genAI systems are used for software, to ensure that: (a) experienced developers' skills don't atrophy, and (b) new developers understand these tools aren't for neophytes, because they lead newbies far astray.
-
@bkuhn @cwebber @evan @richardfontana @zacchiro Even if attribution issues disappear, surely it's still a time bomb for projects that are intentionally not using copyleft licenses, or that use incompatible licenses?
-
@bkuhn @richardfontana @evan @cwebber @ossguy Interesting! “Copyright law — and the legal precedents around it — differ widely for different types of creative works. Analysis of the copyrightability of works of software varies in notable ways. Therefore, do not assume that analyses for images apply broadly to software.”
-
A WWII reference is never helpful in a discussion unless the topic *is actually* WWII.
I'd be glad to have a serious discussion with you, but if you follow Godwin's law again, I probably will block you.
I know emotions are frayed and the FOSS community is frightened and worried, so I forgive you. But there is no reason to claim the situation with LLM-backed AI is tantamount to Hitler's violent invasion of Europe.
-
@trwnh @bkuhn @ossguy @richardfontana Plenty of Microsoft code has been released under "shared source" licenses, and it also leaks
@cwebber wrote:
> “Plenty of Microsoft code has been released [publicly] under "shared source" licenses [and may well be in training sets]”
An interesting point! We should call on Microsoft to agree that, if you end up with copyright violations of their source-available licenses, they will offer a unilateral covenant not to sue. They already indemnify Copilot users, but it's a good advocacy tactic to point out that they didn't indemnify Claude users for the same.
Cc: @trwnh @richardfontana @ossguy
-
@bkuhn @cwebber @evan @richardfontana @zacchiro OpenZFS is accepting LLM-assisted patches today. If a CDDL project like it received non-trivial, near-verbatim GPL code from another project via an LLM, I would think that would be an issue. Or vice versa: CDDL code in a GPL project.
-
I saw this comment after I saw you elsewhere in the thread comparing the LLM-backed genAI situation to WWII, so I am having a lot of trouble taking this seriously.
Plus your comment is snarky, sarcastic, mean, and slightly ad hominem. There is no reason for all that in civil debate.
@bkuhn @wwahammy @silverwizard @cwebber @richardfontana you seem to be caught up in prissy rules of politeness rather than being able to see the meat of my argument.
If you want to give LLM advocates every accommodation, but are unwilling to do that for other people, then you are making clear your preferences on the subject in a way that shows you were never trying to understand both sides, you were never willing to move your position, and you are just pushing an agenda. All your talk of debates and understanding both sides rings hollow if you can only accept discussion in one particular format.
Hypocrite.
If I'm wrong, prove me wrong.
You're not going to.
-
@bkuhn @ossguy @LordCaramac @richardfontana
- There are plenty of FOSS projects we care about which are not under copyleft. What terms should they consider received code to be under? Should SDL now consider all LLM-based output to be under the GPL? The AGPL? Which? Do you expect such a project to switch its license to copyleft now?
- Microsoft's proprietary code may not be, but plenty of proprietary code is available under extremely non-FOSS, restrictive licenses, and it is within the datasets we are getting contributions from *today*
- The mutually-assured-destruction "safe option" isn't that things are under copyleft for proprietary companies, though; that's still a losing scenario for them. So that doesn't help the case for copyleft; only accepting that LLM output is in the public domain does (which we don't know).
If there was ever a time in 40+ years of #FOSS history to tell our #copyleft-hating FOSS friends that they erred in their license choices, now is the time.
If they don't switch, they're giving hand-outs to the proprietary software companies. Now, in an entirely new & #disturbing way.
I really think these cases where proprietary software ends up in #LLM training sets & actually creates risk are exceedingly rare, if not entirely hypothetical.
-
@cwebber @LordCaramac @bkuhn @richardfontana Sadly it will be years before we have an answer re copyright and we can't wait for that. Outlining usage in the meantime is the best we can do, in case we need to do something with that later.
We know proprietary software companies are using these tools extensively, so this is in effect a mutually assured destruction situation. While we wait, we should make sure that we are pushing freedom on all other axes, since they won't do that part.
@ossguy @cwebber @LordCaramac @bkuhn @richardfontana This sounds to me like the proprietary software companies are using code LLMs to mass copyright-launder copyleft (GPL etc.) code so as to basically incorporate it all without honouring its code (re)distribution terms. This is a grimmer situation than I had anticipated, and I'm not widely known as an optimist.
-
@cwebber @bkuhn @ossguy @richardfontana
Indeed, Big Tech knows full well that FLOSS / indie creators don't have the legal funds to defend themselves, or their IP.
I've spent my whole career building organizations that could, in fact, defend against Big Tech when they trample on the FOSS community's rights.
While it's difficult work to fund, SFC has done it on a shoestring budget for two decades now, and we've yielded results in both copyright and contract litigation.
Big Tech *is* afraid of us. The mouse can roar.
See also: https://sfc.ngo/vizio/