so 3 courts + US Copyright Office say you cannot copyright nor patent anything made primarily with LLMs because automata aren't human.
-
@blogdiva Those rulings would probably only apply to the LLM generated parts; any real software product would be a mix of human-designed and AI generated parts, so it would presumably still have copyright protection. Now it is possible that a software product that is entirely "vibe coded" isn't copyrightable in the US, but currently those products suck too badly to be worth stealing.
-
@elduvelle I've no problem & I'm quite certain my reply was to your sophomoric response to the OP.
@DrSaucy that doesn't explain what you didn't like in my answer, but ok
-
If an AI/LLM reverse engineers the Windows codebase, and publishes the results, is this a Copyright violation?
What if Copilot does this? Is it a contract violation?
Did Copilot sign an NDA?
@SpaceLifeForm @blogdiva well since these days MS seems to be updating the Windows codebase using vibe coding then none of it is copyright anyway.
-
so 3 courts + US Copyright Office say you cannot copyright nor patent anything made primarily with LLMs because automata aren't human.
#SCOTUS won't review these rules because copyright is meant to protect human creations, not software or automata.
this may mean #AWSlop #Microslop are “de-copyrighting” & “de-patenting” their own proprietary software as they let automata “code” 🧐
❝ AI-generated art can’t be copyrighted after Supreme Court declines to review the rule
https://www.theverge.com/policy/887678/supreme-court-ai-art-copyright
-
@blogdiva @baldur It's hard to make the distinction here
> The US federal circuit court similarly determined that AI systems can’t patent inventions because they aren’t human, which the US Patent Office reaffirmed in 2024 with new guidance, stating that while AI systems can’t be listed as inventors on a patent, people can still use AI-powered tools to develop them.
I wonder how judges are going to judge that… (I guess it's a bit of a Ship of Theseus problem?)
-
@javerous @blogdiva Considering the judges only come into it when there's a legal issue—something that leads to a challenge in court—they don't need to answer this question in the abstract but tackle it based on the evidence brought before them by the lawyers arguing the case.
So, things like emails, process documentation, marketing, etc. They don't need to address it as a philosophical question
-
hence the use of US, as in UNITED STATES
@blogdiva is it mansplaining or manregioning? why not both!?
-
Definitely, see my other answer here:
https://neuromatch.social/@elduvelle/116161779140284723

In the end I'd say the question is "who should benefit from the copyright", not whether the LLM's output is copyrightable or not, because I don't see why it wouldn't be. Obviously it's not going to be easy to figure it out, but in theory all those who contributed to the output (including in the training set) should be considered as contributors. The LLM itself, like a typewriter, is not a contributor.
@elduvelle @jaystephens
Your continuing not to see why LLM output can't be copyrightable is neither here nor there. It can't. The part written by the human is the prompt itself. You could copyright that, sure. It just isn't useful.
If you could get a court to agree copyright went to all human contributors of the training data, then *nobody* could benefit from it, as nobody would have a right to make copies of it without *all* the contributors or their estates granting a license.
-
@blogdiva this is dumb in many ways. Copyright was never meant to protect art or artists; the purpose has always been to protect profitability, not human creativity. Once you do art for profit, it stops being art. The fact that these courts fell for old capitalist propaganda is hilarious.
-
@petealexharris yeah, obviously the fact that the LLM's output comes from untraceable and sometimes stolen data is a problem.
My main point is that the SCOTUS considering the output of an LLM to somehow be the "creation" of software, instead of the creation of a group of humans, is silly and wrong. It's as if they fell into the trap of treating the LLM as a separate entity, as if it were some kind of actual artificial intelligence… which it really is not.
Software doesn't "create" anything, and the output of software like Photoshop is no different from the output of software like an LLM; it's still created by humans in the first place. The only difference is that we can't easily trace the origin of the LLM's output.
-
@oliver_schafeld 5% actual work, 35% interoperability crap, 60% getting people to actually switch to it.
-
@elduvelle @jaystephens
If you can't track from the creative input of the human to the output, there's no provenance to attach ownership to. If you can identify that it contains unlicensed copyrightable material, then it's infringing. Obviously you can't assert copyright on someone else's work, and if it's a mix, nobody can. The courts know it's a mess, and I suspect they are refusing to make it worse.
-
@Viss that is EXACTLY the admission i was thinking of. also, the AWS “agentic” fiasco that deleted a whole server farm, or whatever it was? yah. should be interesting.