Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that

@mjg59 you mean "not by paying $200 a month to a wannabe megacorp"? Yeah, not like that indeed.
13-year-old me started coding on an old Windows 3.1 workstation at ~$0 monthly cost. If I were entering the industry now, when one has to invest in LLMs, which by the way also prevent you from gaining actual skills and erode existing ones, I simply would not have done it. Must be why Gen Z hates LLMs
I don't see how one can look at the thought-extruding machine and think "surely it will liberate me"
-
Clearly my most unpopular thread ever, so let me add a clarification: submitting LLM generated code you don't understand to an upstream project is absolute bullshit and you should never do that. Having an LLM turn an existing codebase into something that meets your local needs? Do it. The code may be awful, it may break stuff you don't care about, and that's what all my early patches to free software looked like. It's ok to solve your problem locally.
@mjg59 pretty much. I deeply dislike any PRs I see on various projects where the prompt was basically just something like "I want you to implement this major feature into this project", with no real understanding of the underlying code and whatnot.
I would rather have coders that know what they're doing and that understand their codebases use LLMs than a random Joe Schmoe like those TikTok vibecoders with like 5 monitor screens, brainrotted on short-form content asking Claude to add E2EE to some project or to refactor the rendering process of a game engine or whatnot.
These people are wasting the maintainers' time with a jumbled mess of AI code that makes unstated assumptions and likely breaks on the first try.
---
There's nothing wrong with pulling a git repo and then vibe-coding a quick thing as a test or for your specific use case, but there's everything wrong with upstreaming that as a PR if you have no idea how the project's code even works or how it's architected, and with no tests or checks.
-
Personally I'm not going to literally copy code from a codebase under an incompatible license because that is what the law says, but have I read proprietary code and learned the underlying creative aspect and then written new code that embodies it? Yes! Anyone claiming otherwise is lying!
@mjg59 This might be the dumbest thing you have written. You basically just said anyone who claims not to have committed copyright infringement is lying, which is both obviously false and insulting to developers.
-
When I write code I am turning a creative idea into a mechanical embodiment of that idea. I am not creating beauty. Every line of code I write is a copy of another line of code I've read somewhere before, lightly modified to meet my needs. My code is not intended to evoke emotion. It does not change how people think about the world. The idea→code pipeline in my head is not obviously distinguishable from the prompt→code process in an LLM
@mjg59 "Every line of code I write is a copy of another line of code I've read somewhere before." This cannot possibly be true. Surely, as a developer, you've written some original content: something unique, a function of your own, something you hadn't simply read somewhere before?
Even if it is somehow true for you, it is not at all how most developers write code.
-
Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that

@mjg59 Most of the discourse just shows why "the Linux community" is considered an elitist, toxic cesspit by most non-Linux people
And it's wild, because many who consider themselves the good folks in this regard are also participating in this toxicity
it's like being condescending and shaming others for their poor choices is seen as the normal thing to do
-
When I write code I am turning a creative idea into a mechanical embodiment of that idea. I am not creating beauty. Every line of code I write is a copy of another line of code I've read somewhere before, lightly modified to meet my needs. My code is not intended to evoke emotion. It does not change how people think about the world. The idea→code pipeline in my head is not obviously distinguishable from the prompt→code process in an LLM
@mjg59 this may be true for code I don't care about or need to deliver quickly; everything else definitely contains as much beauty as I am capable of
-
@mnl @david_chisnall @mjg59 @ignaloidas
even reading the first page.
Generally, this assessment of the overall book extends to each page, even if the book contains pages with errors.
For LLMs, there is a probability that each query results in garbage. In the book analogy, it is as if each page were written by a different author, some experts, some crooks,
except no page is attributed, and guessing who wrote which page is up to the reader.
There is no model to be built around that failure mode.
2/2

@newhinton @david_chisnall @mjg59 @ignaloidas I'm not really following. Using an LLM doesn't erase my brain the minute I use it, nor is it a random number generator whose answers I'm forbidden to check. These all hold for LLMs.
-
Personally I'm not going to literally copy code from a codebase under an incompatible license because that is what the law says, but have I read proprietary code and learned the underlying creative aspect and then written new code that embodies it? Yes! Anyone claiming otherwise is lying!
@mjg59 "i don't like programming and anyone who does is a liar" is a hill to die on, i guess
-
@david_chisnall @mjg59 I suspect CHERI would make running LLM-generated code more feasible, and probably less risky. I'm not saying this to be an annoying contrarian; rather, stronger underlying models seem to make playing with garbage LLM code more viable. Terry Tao has been using them to generate quick and dirty proofs, cha bu duo ("close enough").
It certainly can. As long as you are careful about the interfaces to the compartment, you can reason about the worst that can happen with the LLM-generated code. I see this as a special case of supply-chain attacks, which the CHERIoT compartmentalisation model was designed to protect against: assume this code works for your test vectors and might be actively malicious in other cases; what's the worst that can happen? LLMs just let you bring the supply-chain attacks in house.
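The compartment reasoning above is CHERI-specific, but the same worst-case discipline can be sketched in any language: call the untrusted (possibly LLM-generated) code through a narrow interface, hand it only copies of your data, and verify its output instead of trusting it. A minimal plain-Python illustration (this is not CHERIoT code; `run_untrusted_sort` is a made-up name):

```python
from collections import Counter

def run_untrusted_sort(sort_fn, data):
    """Call an untrusted (e.g. LLM-generated) sort through a narrow,
    checked interface: pass it a copy, then verify every invariant
    we care about before believing the result."""
    result = sort_fn(list(data))  # copy: the callee can't mutate our data
    if len(result) != len(data):
        raise ValueError("untrusted sort changed the length")
    if Counter(result) != Counter(data):
        raise ValueError("untrusted sort changed the elements")
    if any(a > b for a, b in zip(result, result[1:])):
        raise ValueError("untrusted sort output is not ordered")
    return result
```

The checks are cheaper than rewriting the sort yourself, which is the point: you bound the damage at the interface rather than auditing every line of the generated code.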
-
Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that

@mjg59 my 2 favourite single-user LLM use cases are:
One is for people who are physically immobile, to help them interact with others. Seeing how these tools can make them more able to engage with the world is heartening.
The other is my non-tech musician friend who made a simple web page that ensures he plays all his tunes regularly but in random rotation. It hooks into Google Sheets and he slopped it all up by himself.
-
@david_chisnall @mjg59 @ignaloidas just like humans! Or books!
@mnl @david_chisnall @mjg59 @ignaloidas you don't pick humans or books randomly.
-
@zacchiro I understood the ask I replied to was about ethical training. Mistral, as an EU company, has to abide by EU regulations that AI companies in the US, China, etc. don't have to.
@troed I see. I don't know either what @chris_evelyn had in mind, so I'll leave it to them. But for what it's worth, the EU AI Act applies equally to all companies with access to the EU market. Mistral is not special in that respect, unless the other players decide to leave the EU market (which is unlikely). @mjg59
-
@mnl @david_chisnall @mjg59 @ignaloidas you don't pick humans or books randomly.
@ced @david_chisnall @mjg59 @ignaloidas neither does an LLM? We are perfectly able to deal with, say, search engine results, which are arguably more problematic than LLMs. For all intents and purposes, the books and resources I have at my disposal are also the product of random processes. I can still work with them to learn things.
-
When I write code I am turning a creative idea into a mechanical embodiment of that idea. I am not creating beauty. Every line of code I write is a copy of another line of code I've read somewhere before, lightly modified to meet my needs. My code is not intended to evoke emotion. It does not change how people think about the world. The idea→code pipeline in my head is not obviously distinguishable from the prompt→code process in an LLM
This is such a bullshit, belittling framing of what developers do. The fact that you also belittle yourself doesn't make it any better.
Sure, an individual line of code may not be very original. But the arrangement of many lines is. Your comparison is about equivalent to saying "hah, how can an author produce anything novel if he's just using the same old words of the English language!"
-
Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that

@mjg59@nondeterministic.computer If you want to use LLMs to make software do what you want, feel free to do it in a private fork. Private forks for yourself are fine. Private is private.
But it's also the freedom of the developer/maintainer of the software to not allow such changes upstream, or to require that such changes be marked.
-
Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that

@mjg59 I have some issues with using LLMs, but the only one in the free software world is about license tainting: I'm not sure the code generated by an LLM is public domain.
-
@ced @david_chisnall @mjg59 @ignaloidas neither does an LLM? We are perfectly able to deal with, say, search engine results, which are arguably more problematic than LLMs. For all intents and purposes, the books and resources I have at my disposal are also the product of random processes. I can still work with them to learn things.
@mnl @david_chisnall @mjg59 @ignaloidas well great for you. *I*'m not able to deal with random search results (especially now that they are often slop). And if your books were bought randomly, sure. Mine were selected because I trust the author, or because I know enough about the author bias to be able to correct it.
-
@mnl @david_chisnall @mjg59 @ignaloidas well great for you. *I*'m not able to deal with random search results (especially now that they are often slop). And if your books were bought randomly, sure. Mine were selected because I trust the author, or because I know enough about the author bias to be able to correct it.
@ced @david_chisnall @mjg59 @ignaloidas do you not use a search engine (genuinely curious, I love building search engines and making them work well)?
Do you think it's impossible to assign varying degrees of trust to LLM output?
-
Look, coders, we are not writers. There's no way to turn "increment this variable" into life changing prose. The creativity exists outside the code. It always has done and it always will do. Let it go.
@mjg59 I think this understanding of art stems from a misunderstanding of what art itself is.
Of course writing code can be an artistic activity, and trying to argue against that just shows a deep misunderstanding of those who see it that way.
But "art's goal" isn't even to be life-changing prose; most art's goal isn't that at all. Most "classical" art was even seen as "just a craft".
"beauty" can manifest in many ways, and self-expression through code is a thing.
-
@newhinton @david_chisnall @mjg59 @ignaloidas I'm not really following. Using an LLM doesn't erase my brain the minute I use it, nor is it a random number generator whose answers I'm forbidden to check. These all hold for LLMs.
@mnl@hachyderm.io @newhinton@troet.cafe @david_chisnall@infosec.exchange @mjg59@nondeterministic.computer the difference is that you can gain trust that some author knows their stuff in a specific field, and you no longer need to cross-check every single thing they write.
With an LLM no such trust can be developed, because fundamentally it's just rolling dice from a modeled distribution; the fact that the LLM was right about something 9 previous times has no influence on whether the next statement will be correct or wrong.
It's these trust relationships that allow us to work efficiently; cross-checking everything every time is incredibly time-consuming.
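The "rolling dice" claim can be sketched as a toy simulation: if each answer is modeled as an independent draw, a streak of past correct answers tells you nothing about the next one. This only illustrates the commenter's model (the function name and parameters are made up, and real LLM errors are not truly independent):

```python
import random

def conditional_accuracy(p, streak=9, trials=100_000, seed=1):
    """Estimate P(next answer correct | previous `streak` answers correct)
    when every answer is an independent draw with success probability p."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        answers = [rng.random() < p for _ in range(streak + 1)]
        if all(answers[:streak]):      # condition on a streak of correct answers
            total += 1
            hits += answers[streak]    # did the following answer come out correct?
    return hits / total
```

Under this model, `conditional_accuracy(0.9)` comes out near 0.9 regardless of the streak: the 9 previous successes buy no extra trust in the 10th answer, which is exactly the point being made.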