Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that
-
Personally I'm not going to literally copy code from a codebase under an incompatible license because that is what the law says, but have I read proprietary code and learned the underlying creative aspect and then written new code that embodies it? Yes! Anyone claiming otherwise is lying!
@mjg59 hey, I'm all for laundering IP, I just need to make sure it launders proprietary IP as well as open-source!
Faceless corps: NO NOT LIKE THAT!!
-
Clearly my most unpopular thread ever, so let me add a clarification: submitting LLM generated code you don't understand to an upstream project is absolute bullshit and you should never do that. Having an LLM turn an existing codebase into something that meets your local needs? Do it. The code may be awful, it may break stuff you don't care about, and that's what all my early patches to free software looked like. It's ok to solve your problem locally.
@mjg59 strictly local needs, you do you.
If using a giant model like Claude, you might want to consider what remodelling that code will cost the planet in terms of direct carbon output, electricity generation, water pollution, amortised environmental cost of building the Pollution Centres and the ongoing damage to local communities of the Pollution Centres.
If you can live with all that? Sure, use your magic auto complete. Just don't expect others to not judge you for it. Not saying I would, btw, but that's the argument.
-
@mjg59 I think the negativity comes from the fact that a lot of floss developers have other reasons why they work on projects besides scratching their own itch - "meeting the local needs" as you put it.
That is, expanding their knowledge and sometimes even the enjoyment of the act of programming itself.
So if you treat open source development as a learning experience and an artistic expression, you're automatically going to balk at something that would take that away.
-
@mjg59 I think the submitting it back is the part people are angry about, not that it is possible
-
@mjg59@nondeterministic.computer my problem with this argument is that LLMs aren't good at modifying the software, nor are they good at creating software that's easily modifiable.
Also, I'd note that it's less free software people and more people who are interested in quality software, and it's that interest that has driven them to free software, because most free software is of too high a quality for most companies to make or buy from an economic standpoint.
-
@mjg59 The LLM-hate reminds me of the backlash against computers themselves. People insisted they were 100% worthless because someone got a bill for $0, and then a notice they were in arrears when it was not paid. Many projects either failed outright or people had to do their work twice, first the old pen and paper way which worked, and then also put it into the computer never to be seen again...
-
I’ve heard this argument before and I disagree with it. My goal for Free Software is to enable users, but that requires users have agency. Users being able to modify code to do what they want? Great! Users being given a black box that will modify their code in a way that might do what they want but will fail in unpredictable ways, without giving them any mechanism to build a mental model of those failure modes? Terrible!
I am not a carpenter but I have an electric screwdriver. It’s great. It lets me turn screws with much less effort than a manual one. There are a bunch of places where it doesn’t work, but that’s fine, I can understand those and use the harder-to-use tool in places where it won’t work. I can build a mental model of when not to use it and why it doesn’t work and how it will fail. I love building the software equivalent of this, things that let end users change code in ways I didn’t anticipate.
But LLM coding is not like this. It’s like a nail gun that has a 1% chance of firing backwards. 99% of the time, it’s much easier than using a hammer. 1% of the time you lose an eye. And you have no way of knowing which it will be. The same prompt, given to the same model, two days in a row, may give you a program that does what you want one time and a program that looks like it does what you want but silently corrupts your data the next time.
That’s not empowering users, that’s removing agency from users. Tools that empower users are ones that make it easy for users to build a (nicely abstracted, ignoring details that are irrelevant to them) mental model of how the system works and therefore the ability to change it in precise ways. Tools that remove agency from users take away their ability to reason about how systems work and how to effect precise change.
I have zero interest in enabling tools that remove agency from users.
-
@jenesuispasgoth @mjg59 This is not an AI endorsement, but given a sufficiently large problem / codebase, I would wager you wouldn't get a reliably identical result from having a human write code for the same problem twice either.
We expect determinism from LLMs because "it's computers", not because it's necessary for good results.
-
Look, coders, we are not writers. There's no way to turn "increment this variable" into life changing prose. The creativity exists outside the code. It always has done and it always will do. Let it go.
They do speak of 'elegance', even 'beauty', when it comes to mathematical proofs.
Aesthetics are not a positivist axiology. Beauty is famously in the eye of the beholder.
Just because you are aware you write ugly code doesn't mean code cannot be beautiful, or that aesthetics are not a legitimate field of assessing information systems.
-
@dekkzz78 I am absolutely not going to argue that LLMs replace the need for skilled developers! But many people who want to modify software are just doing it for personal use, and if we argue using LLMs for that is unethical we risk alienating them all.
-
@barubary given my history, if your immediate conclusion is that I'm not presenting an honest opinion then I think you have a fundamental misunderstanding of who I am
@mjg59 No, I do think you're being honest, I just think your opinion is kinda bad.
-
When I write code I am turning a creative idea into a mechanical embodiment of that idea. I am not creating beauty. Every line of code I write is a copy of another line of code I've read somewhere before, lightly modified to meet my needs. My code is not intended to evoke emotion. It does not change how people think about the world. The idea→code pipeline in my head is not obviously distinguishable from the prompt→code process in an LLM.
@mjg59 skill issue tbh
i barely write code for other reasons
-
@david_chisnall @mjg59 @ignaloidas llms can be used to explain and learn things. Unsurprisingly, that’s what many people do when things don’t work, be they written by a human or not, and they want them to work.
-
@mjg59 What you propose is actually illegal, even if the law doesn’t make much sense. I wonder if you ever had the cops sent after you on a corp-run IP case… maybe it would make you feel different?
@promovicz
That completely oversimplifies what's being discussed here. Every math book you ever studied is copyrighted; that does not mean you cannot use what you learned to solve math problems.
-
@mjg59 heh, one of the new ideas in a project I'm doing virtualization work for is to have a fully local LLM generate bespoke apps and instantly summon them directly on the desktop.
I don't think current local LLMs are actually "ethical" either, all my "fuck that entire industry" concerns are always present, and personally I wouldn't like using straight up fuzzy statistically magically inferred apps at all. But I do like the idea of empowering people to locally just do bespoke things like that, as long as there's always a big disclaimer about it being made that way and so on.
-
@barnoid Huh interesting, that's really not my experience of writing code - I sit down with a formed idea of what needs to happen and then I smash keys until it's there. And now I'm curious whether there's a real disconnect between different models of coding.
@mjg59 You never realise the original idea could be improved a bit along the way? This probably depends on what's being worked on. Most of the stuff I do is fairly small scale and not particularly well specified (day job is mostly sysadmin, off day jobs are museum installations).
-
And they will give entirely plausible explanations. Occasionally, by coincidence, they will be correct.
-
@david_chisnall @mjg59 @ignaloidas just like humans! Or books!
-
Not even close. Humans build mental models of things and, if correct in one area, are likely to be correct in adjacent ones. And, in most cases, are able to say ‘I don’t know” when they don’t know the answer. Books (at least, those from reputable publishers) are reviewed by technical reviewers who spot factual errors, and have finite contents and so will simply not contain an answer if it is not something the author thought to write.
LLMs will interpolate over an n-dimensional latent space to provide a convincing answer. That answer may, if those bits of the latent space were well populated by things in the training set, be correct. But there is no difference from an LLM’s perspective between a correct and incorrect answer, only a likely and unlikely one.