Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that
-
@mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe through repeated checks and knowledge that humans are consistent.
And like, really, you don't trust your code at all? I, for example, know that the code I wrote is not going to cheat on unit tests, is not going to re-implement half of the things from scratch when I'm working on a small feature, nor will it randomly delete files. After working with people for a while, I can be fairly sure that the code they've written can be trusted to the same standards. LLMs can't be trusted with these things, and in fact have been documented to do all of these things.
It is not a blind, absolute trust, but trust within reason. The fact that I have to explain this to you is honestly embarrassing.
@ignaloidas @mjg59 @david_chisnall @newhinton but “fairly sure” is not full trust. I can also be “fairly sure” that something works, but I’m not going to trust my judgment and instead will try to validate it and provide proper guardrails so that if it is misbehaving, it is at least contained. Some things will be just fine even if broken, some less so and will make me invest more of my time. I am not going to try to prove the kernel correct just because I am changing a css color. I don’t see how that is different with llms, and I use them every day. If anything, they allow me to validate more.
-
@mjg59 This doesn't feel right to me. IMO few people actually object to use of LLMs by individuals for tinkering on personal stuff.
The criticism as I see it is primarily that:
1) there are huge societal/political impacts - uncompensated use of copyrighted material; benefits of it accruing primarily to a few big players; energy use; layoffs; perceived misallocation of massive amounts of capital
2) the output quality of LLMs is t r a s h, unsuitable for professional use
-
@ignaloidas @mjg59 @david_chisnall @newhinton but “fairly sure” is not full trust. I can also be “fairly sure” that something works, but I’m not going to trust my judgment and instead will try to validate it and provide proper guardrails so that if it is misbehaving, it is at least contained. Some things will be just fine even if broken, some less so and will make me invest more of my time. I am not going to try to prove the kernel correct just because I am changing a css color. I don’t see how that is different with llms, and I use them every day. If anything, they allow me to validate more.
@mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe you are falling for the cryptocurrency fallacy, assuming that you cannot trust anyone and as such have to build stuff assuming everyone is looking to get one over on you.
This is tiresome, and I do not care to discuss this with you any longer. If you cannot understand that there are levels between "no trust" and "absolute trust", there is nothing more to discuss.
-
@mnl@hachyderm.io @engideer@tech.lgbt @david_chisnall@infosec.exchange @mjg59@nondeterministic.computer LLMs are very much random number generators. The distribution is far, far from uniform, but the whole breakthrough of LLMs was the introduction of "temperature", quite literally random choices, to break them out of monotonous tendencies.
@ignaloidas @mjg59 @david_chisnall @engideer temperature based sampling is just one of the many sampling modalities. Nucleus sampling, top-k, frequency penalties, all of these introduce controlled randomness to improve the performance of llms as measured by a wide variety of benchmarks.
A random sampling of tokens would actually be uniformly distributed… and obviously grammatically correct sentences are a clear sign that we are not randomly sampling tokens.
Are we talking about the same thing?
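The distinction being argued in this sub-thread — a draw from a heavily skewed distribution is still random — is easy to demonstrate. A minimal standalone sketch of temperature plus top-k sampling; the toy vocabulary and logit values are invented for illustration, not taken from any real model:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None):
    """Draw one token from a softmax over `logits` (a dict of token -> score).

    Temperature rescales the scores and top-k truncates the candidate list:
    both are 'controlled randomness' -- the draw itself is random, but the
    distribution it comes from is far from uniform.
    """
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        items = items[:top_k]  # keep only the k highest-scoring tokens
    scaled = [(tok, score / temperature) for tok, score in items]
    m = max(score for _, score in scaled)  # subtract max for numerical stability
    weights = [math.exp(score - m) for _, score in scaled]
    return random.choices([tok for tok, _ in scaled], weights=weights, k=1)[0]

# Toy scores: "the" is strongly preferred over the alternatives.
logits = {"the": 10.0, "cat": 5.0, "xyzzy": -5.0}
```

At a low temperature like 0.1 the softmax is so sharp that the draw is effectively deterministic, while a very high temperature flattens it toward uniform — the same random mechanism, with a tunable spread.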
-
@mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe you are falling for the cryptocurrency fallacy, assuming that you cannot trust anyone and as such have to build stuff assuming everyone is looking to get one over on you.
This is tiresome, and I do not care to discuss this with you any longer. If you cannot understand that there are levels between "no trust" and "absolute trust", there is nothing more to discuss.
@ignaloidas @mjg59 @david_chisnall @newhinton I think you are misreading what I am saying. That is exactly what I am saying. I never fully trust my code, not a single line of it, partly because every line of my code usually requires billions of lines of code I haven’t written in order to run. I can apply methods and use my experience to trust it enough to run it.
-
@ignaloidas @mjg59 @david_chisnall @engideer temperature based sampling is just one of the many sampling modalities. Nucleus sampling, top-k, frequency penalties, all of these introduce controlled randomness to improve the performance of llms as measured by a wide variety of benchmarks.
A random sampling of tokens would actually be uniformly distributed… and obviously grammatically correct sentences are a clear sign that we are not randomly sampling tokens.
Are we talking about the same thing?
@mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @engideer@tech.lgbt the fact that something is random does not mean that it has a uniform distribution. "controlled randomness" is still randomness. Taking random points in a unit circle by taking two random numbers for distance and direction will not result in a uniform distribution, but it's still random.
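The unit-circle point can be checked numerically. A quick standalone sketch (not from the thread; the 50% vs. 25% figures follow from the area argument — a disc of radius 0.5 covers a quarter of the unit disc's area, but half of a uniformly drawn radius range):

```python
import math
import random

def naive_point():
    # Pick a random distance and direction: still random, but NOT uniform
    # over the disc -- points pile up near the centre.
    r = random.random()
    theta = random.uniform(0, 2 * math.pi)
    return r * math.cos(theta), r * math.sin(theta)

def uniform_point():
    # Taking sqrt of the radius compensates for area growing as r^2,
    # giving a uniform distribution over the disc.
    r = math.sqrt(random.random())
    theta = random.uniform(0, 2 * math.pi)
    return r * math.cos(theta), r * math.sin(theta)

def frac_inner(sampler, n=100_000):
    # Fraction of samples landing within radius 0.5 of the centre.
    hits = sum(1 for _ in range(n) if math.hypot(*sampler()) < 0.5)
    return hits / n

# frac_inner(naive_point)   -> about 0.50 (non-uniform, but random)
# frac_inner(uniform_point) -> about 0.25 (uniform over the disc)
```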
like, do you even read what you're writing? I'm starting to understand why you don't trust the code you wrote
-
@mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @engideer@tech.lgbt the fact that something is random does not mean that it has a uniform distribution. "controlled randomness" is still randomness. Taking random points in a unit circle by taking two random numbers for distance and direction will not result in a uniform distribution, but it's still random.
like, do you even read what you're writing? I'm starting to understand why you don't trust the code you wrote
@ignaloidas @mjg59 @david_chisnall @engideer now you are talking about absolute trust. I do think we are indeed talking about different things. Do you use LLMs? Do you assign the same level of trust to qwen-3.6 as to gpt-2? Because I do not, partly based on benchmarks, partly on personal experience, partly on my (admittedly perfunctory) theoretical understanding of their training and inference setup.
-
Look, coders, we are not writers. There's no way to turn "increment this variable" into life changing prose. The creativity exists outside the code. It always has done and it always will do. Let it go.
@mjg59 Indeed.
This is why code generation is not a solution to the problem.
Which problem? People will phrase it differently, but the basic idea is to outsource *the hard part*: analysing requirements and phrasing them to guide the LLM.
LLMs suck at dealing with shitty specs. They even suck at dealing with good specs. They even suck at dealing with specs they themselves suggested.
https://finkhaeuser.de/2026-04-10-outsourcing-thought-is-going-great/
So using LLMs isn't solving the problem, which is that thinking is hard.
-
@mjg59 but wait, there's more
What if you're not renowned security expert and open-source celebrity @mjg59 (who currently works at nvidia btw, profiting from the LLM boom, sorry) but just some guy trying to make ends meet doing some coding?...
Now you get an LLM mandate from your company that comes with the implication that 'either you boost your productivity by 80% or we fire you and contract a cheap prompter in your place'...
If the cheap prompter can produce the same results, what are the arguments against this?
- copyright violation in the training material
- excessively high use of the world's resources for training and inference
If both of those were handled (that's a big if. Maybe someday, maybe not), what would the arguments be against choosing the cheap prompter?
-
Look, coders, we are not writers. There's no way to turn "increment this variable" into life changing prose. The creativity exists outside the code. It always has done and it always will do. Let it go.
@mjg59 you’re doing the thing where you’re romanticizing another profession by assuming the grass is greener. most writers are not novelists. most are writing pretty dry ad copy or instruction manuals or something, just like most programmers aren’t writing especially novel or beautiful algorithms (or, for that matter, video games where algorithmic processes evoke a feeling). you’re just confusing form and content here
-
If the cheap prompter can produce the same results, what are the arguments against this?
- copyright violation in the training material
- excessively high use of the world's resources for training and inference
If both of those were handled (that's a big if. Maybe someday, maybe not), what would the arguments be against choosing the cheap prompter?
@seanfurey @mjg59 lmao. Assuming a total of 20 million software developers world-wide, what is the problem with firing 5-10 million people in the span of 1-2 years? You really can't think of any problem with this except the blatant copyright violations and disastrous environmental impact? Those are people my guy, they and their families need food, shelter, healthcare, and people can't just choose a new craft, let alone while competing with a couple of million in the same situation...
-
Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that
if i am honest the price of such, psychotic breaks, isn't worth the freedom of per request billing
-
if i am honest the price of such, psychotic breaks, isn't worth the freedom of per request billing
@mjg59 it is a fair criticism of free software that it hasn't managed to meaningfully increase people's agency over the computer
but it is a flight of fancy to suggest that extractive labor and outsourcing gives people that agency or control
even before we get to the "software that kills teenagers" part of the faustian pact
-
@jenesuispasgoth @mjg59
Some people think they can recycle FOSS from one licence to another using an LLM, such as GPL2 to MIT or whatever. They are IP thieves.
All FOSS code, under any so-called copyleft licence, is actually copyrighted. Public domain code is a special case and in reality rare for anything written in the last 50 years. All of AT&T UNIX is still under copyright.
Even programs or OSes where the source has been made public with limitations on use are mostly still under some sort of copyright.
@raymaccarthy @jenesuispasgoth @mjg59 I don't much like the answer, but the assessment in the US seems to be that, yes, this laundering works if the new code is different enough.
If you sidestep the question of whether the output can be copyrighted (such as chardet did in the end) and you rename it, you're probably "good".
(Again. Me no like. And maybe different in the EU.)
-
@seanfurey @mjg59 lmao. Assuming a total of 20 million software developers world-wide, what is the problem with firing 5-10 million people in the span of 1-2 years? You really can't think of any problem with this except the blatant copyright violations and disastrous environmental impact? Those are people my guy, they and their families need food, shelter, healthcare, and people can't just choose a new craft, let alone while competing with a couple of million in the same situation...
-
Clearly my most unpopular thread ever, so let me add a clarification: submitting LLM generated code you don't understand to an upstream project is absolute bullshit and you should never do that. Having an LLM turn an existing codebase into something that meets your local needs? Do it. The code may be awful, it may break stuff you don't care about, and that's what all my early patches to free software looked like. It's ok to solve your problem locally.
@mjg59 I don't think your points in this thread are wrong, but I'm going to gently, firmly disagree with you about the universality of your statements.
I program for many reasons, but a core reason why I enjoy it so much is that I learn new things about the problem space during the process. I treasure that. I go back to restructure my code after it works to try to share this process of discovery & learning with folks who might read my code later.
LLM coding for effect only ignores this.
1/2
-
@seanfurey @petko @mjg59 The smarter companies strive for augmentation rather than replacement. Only those seeking excuses for bad cash flow, or those who genuinely have no idea what to do with higher productivity, go for replacement.
That said, I do think there is an unbelievable number of those. Plus it widens the gap between those who can benefit the most and those who can't.
The ethical concerns are "mostly" in the supply chain and the fascists selling the systems today.
-
@raymaccarthy @jenesuispasgoth @mjg59 I don't much like the answer, but the assessment in the US seems to be that, yes, this laundering works if the new code is different enough.
If you sidestep the question of whether the output can be copyrighted (such as chardet did in the end) and you rename it, you're probably "good".
(Again. Me no like. And maybe different in the EU.)
@larsmb @jenesuispasgoth @mjg59
The US is the country that on the one hand has the draconian DMCA (unfair) and on the other hand said it's fine for Google to wholesale scan copyrighted works (a totally paid-for decision that isn't "fair use").
The USPTO has been broken since Edison.
It's not a clean-room re-implementation. It's automated plagiarism. I can do that in Perl or WP to a novel, changing places and people. Copyright violation.
Even if you also manually transpose to a different era it might be.