Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that
-
@mnl @david_chisnall @mjg59 @ignaloidas I do use search engines, but if I don't recognize the sites listed or can't easily get context about them, it's now nearly impossible to trust the results. It used to be possible (creating content was costly, so well-written content was usually the mark of someone at least a bit invested in the subject, but even in those cases I used to cross-check several hits); it's not anymore.
LLMs: without knowing the source of the answer, how could it be? It's just plausible.
@ced @david_chisnall @mjg59 @ignaloidas which search engine do you use? I use @kagihq and it’s always a pleasure.
LLMs can provide information about sources. If they tell me that Shannon said X in his thesis on p. 463, I can look it up. If they tell me that variable foo is on line X in file Y, I can easily verify it. If they think that Z compiles, I don’t even need to cross-check that; the computer can do it for me. In fact, verifying certain assumptions about code might be the easiest of them all, which is why LLMs are quite effective at writing code.
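A minimal sketch of that "the computer can do it for me" idea, assuming Python as the language being checked; the claim formats and function names are invented for illustration:

```python
# Mechanically checking two kinds of claims from the post above.
# The file path, line number, and variable name would come from the LLM.
import linecache

def variable_on_line(path: str, line_no: int, name: str) -> bool:
    """Check a claim like 'variable foo is on line X in file Y'."""
    return name in linecache.getline(path, line_no)

def snippet_compiles(source: str) -> bool:
    """Check a claim like 'Z compiles' (here: Python syntax only)."""
    try:
        compile(source, "<llm-claim>", "exec")
        return True
    except SyntaxError:
        return False

print(snippet_compiles("def f(x): return x + 1"))  # True
print(snippet_compiles("def f(x) return x + 1"))   # False - missing colon
```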
-
Clearly my most unpopular thread ever, so let me add a clarification: submitting LLM generated code you don't understand to an upstream project is absolute bullshit and you should never do that. Having an LLM turn an existing codebase into something that meets your local needs? Do it. The code may be awful, it may break stuff you don't care about, and that's what all my early patches to free software looked like. It's ok to solve your problem locally.
@mjg59 telling people that they shouldn't care about the things they care about is generally unpopular, yes
-
@mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe the training objective is not "be correct", so that's not what the models are trained on. They aren't trained on such an objective because there's no way to score it - if you had a system that could determine whether a statement was correct, then you could just use that. No, what the models are trained on are globs of existing text, targeting the continuations to be the same as the text. Notably, most (all?) LLM makers don't even care whether most of the text is "correct" (in any sense of the word), and "solve" it by training on some more carefully selected globs of text. And in the end, what the model itself outputs are probabilities of a specific token (not even a sentence or something) being next. The text you get is all just dice rolls on those probabilities, again and again.
It is a text prediction machine. A very powerful one, but it's just a prediction. It just picks whatever is likely, with no regard for what is correct.
@ignaloidas @mjg59 @david_chisnall @newhinton that’s also not how current LLMs work; there is a significant amount of post-training using RL being done, and that too is a whole field of research.
Furthermore, current LLM-based tools usually do multiple rounds of inference interspersed with more traditional “tool calls” (or, as I prefer to call it, interpreting sampled tokens in a deterministic/formal manner).
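To make the "dice rolls on those probabilities" point concrete, here is a toy sketch of a single decoding step; the vocabulary and logits are made up, and real systems add temperature, top-k/top-p, and other sampling controls:

```python
# One decoding step: the model emits a score (logit) per vocabulary token,
# softmax turns scores into probabilities, and sampling picks the next token.
import math
import random

vocab = ["correct", "plausible", "wrong"]
logits = [2.0, 2.2, 0.3]  # hypothetical model output for the next position

exps = [math.exp(l) for l in logits]
probs = [e / sum(exps) for e in exps]  # softmax

# the "dice roll": sample the next token in proportion to its probability
next_token = random.choices(vocab, weights=probs, k=1)[0]

# greedy decoding would instead always take the most likely token
greedy_token = vocab[probs.index(max(probs))]
print(probs, next_token, greedy_token)
```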
-
Personally I'm not going to literally copy code from a codebase under an incompatible license because that is what the law says, but have I read proprietary code and learned the underlying creative aspect and then written new code that embodies it? Yes! Anyone claiming otherwise is lying!
@mjg59 Learning from and adapting ideas from unlicensed code into new code is an accommodation under law for humans. If you built a machine to do this at scale, however, that's a choice to leverage a humane decision into a profitable hack.
-
@ced @david_chisnall @mjg59 @ignaloidas @kagihq to the search engine thing, one reason I think they’re usually more problematic to use is that there are actually incentives to make results worse. I switched to Kagi from google/duckduckgo before ChatGPT because the results were already complete trash.
Sure, I have to pay per search, but that’s the only business model that at least enables non-gameable results.
-
@mnl @david_chisnall @mjg59 @ignaloidas @kagihq
sure, but if I have to check every sentence, because even if 99 of them are correct I can't trust that the 100th will be, doesn't it quite defeat the point? If I'm not reading a primary source, I have to be sure that I can trust the synthesis (at least to a point). With LLMs I can't.
-
@mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe all of that training is still continuation-based, because that is what the models predict. Yes, there is a bunch of research, and honestly, most of it is banging its head against fundamental issues of the model, but it is still being funded because LLMs are, at the end of it all, quite useless if they spit nonsense from time to time that is indistinguishable from the sensible stuff without carefully cross-checking it all.
Tool calls are just that - tools to add stuff into the context for further prediction, but they in no way ensure that the LLM output is correct, because once again everything is treated as a continuation after the tool call, and it's just predicting what's the most likely thing to do, not what's the correct thing to do.
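A toy sketch of the loop both posts are describing - probabilistic token prediction punctuated by deterministic tool execution, whose result becomes more context to continue from; the `TOOL:` convention and `fake_model` are invented here:

```python
# Tool-calling loop: sampled output is parsed, a deterministic tool runs,
# and its result is appended to the context for the next round of prediction.
import json

def run_tool(call: dict) -> str:
    """Deterministic step: actually compute something."""
    if call["tool"] == "add":
        return str(call["a"] + call["b"])
    return "unknown tool"

def agent_loop(context: str, model) -> str:
    while True:
        out = model(context)  # probabilistic: just a continuation
        if not out.startswith("TOOL:"):
            return out
        result = run_tool(json.loads(out[len("TOOL:"):]))
        # the tool result is only more context; nothing checks the final answer
        context += out + "\nRESULT: " + result + "\n"

def fake_model(context: str) -> str:
    """Stand-in for an LLM: requests a tool once, then answers."""
    if "RESULT:" not in context:
        return 'TOOL:{"tool": "add", "a": 2, "b": 2}'
    return "The answer is 4."

print(agent_loop("What is 2+2?\n", fake_model))  # The answer is 4.
```
-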
When I write code I am turning a creative idea into a mechanical embodiment of that idea. I am not creating beauty. Every line of code I write is a copy of another line of code I've read somewhere before, lightly modified to meet my needs. My code is not intended to evoke emotion. It does not change how people think about the world. The idea→code pipeline in my head is not obviously distinguishable from the prompt→code process in an LLM
> When I write code I am turning a creative idea into a mechanical embodiment of that idea. I am not creating beauty
When *I* code, I am creating beauty, or at least trying to.
I hope each proof/program I write is as close to the proof from "the book" as possible. At a Pareto optimum of simplicity and elegance.
-
@ced I just read the primary source when I think it’s useful to do so
-
@ignaloidas @mjg59 @david_chisnall @newhinton do you blindly trust code just because it’s been written by a human? Or your own code for that matter? I don’t, and yet I am able to produce hopefully useful software. In fact I have to trust an immense amount of software without verifying it, based on vibes. For LLMs at least I can benchmark the vibes, or at least more easily gather empirical observations than with humans.
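One way to read "benchmark the vibes" is as an empirical pass rate over repeated trials, sketched here with a made-up generator standing in for a model call:

```python
# Score repeated generations against a fixed check; keep the pass rate.
import random

def generate() -> int:
    """Stand-in for asking a model a question with a known answer."""
    return random.choice([4, 4, 4, 5])  # made-up 75% success rate

trials = 1000
passes = sum(generate() == 4 for _ in range(trials))
print(f"empirical pass rate: {passes / trials:.1%}")
```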
-
Look, coders, we are not writers. There's no way to turn "increment this variable" into life changing prose. The creativity exists outside the code. It always has done and it always will do. Let it go.
Pragmatic standpoint is completely valid, but don't forget why we have writing systems: to convey information. That's the basic need. So taking the same pragmatic approach, we don't need writers nor poets nor prose nor anything of the sort: language exists to transfer data from human to human, and don't you dare find any of that serialization into English/anything beautiful. Is that it?
-
@mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe Not blindly, of course, but I build up trust relationships with people I work with. And I do trust my own code to a certain extent. I can't trust a bunch of dice. The fact that you don't trust your own code at all honestly tells me all I ever need to know about you.
-
@ignaloidas @mjg59 @david_chisnall @newhinton how did you gain your confidence? How can you call machine learning a bunch of dice? I try to study and build things every day, and yes, I don’t trust my code at all, which I think is a healthy attitude to have? I am definitely not able to produce perfect code on the first try.
-
@mjg59 You will get backlash, but you are right.
Free software folks will have to decide whether what they really wanted was for *everyone* to have the freedom to use and modify software, or only that subset of everyone who had the privilege of learning software development.
There has always been this elitist dividing line in the community between people who contribute code, and people who contribute all the other things FOSS needs to thrive. Now those people can contribute code too.
@kyle @mjg59 Proprietary tooling is the reason "Stallman was right" about BitKeeper, but "everyone was better off for having not listened to him" is the pragmatic side.
Yes, I want people to benefit from the freedom to modify code, but they will never truly be free if they are using a proprietary LLM to make their modifications.
-
@mjg59 Yeah, as soon as there’s an ethically sourced and trained free LLM that’s not controlled by very shitty companies I’m totally on board with you.
Until then we shouldn’t let that shit near our projects.
@chris_evelyn @mjg59 What do you mean by "ethically sourced and trained"?
-
@engideer @david_chisnall @mjg59 @ignaloidas I don’t think LLMs are “randos”. They have randomized elements during training and inference, but they’re not a random number generator. I also would trust a “rando” less than an expert in real life. I wouldn’t blindly trust either of them, either.
-
@mjg59 What you propose is actually illegal, even if the law doesn’t make much sense. I wonder if you ever had the cops sent after you on a corp-run IP case… maybe it would make you feel different?
@promovicz @mjg59 Let's hope the AI lobby will (in any combination of purposely and inadvertently) make that law obsolete.
-
@mjg59 I think the issue is more the forcing of LLMs/AI into *everything* right now, not specifically F/OSS projects. It reeks of dot-com bubble era marketing and in many cases is completely unnecessary.
-
@engideer @david_chisnall @mjg59 @ignaloidas also, I didn’t say any of what you quoted, and I don’t know where you got it from.
-
@mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe through repeated checks and the knowledge that humans are consistent.
And like, really, you don't trust your code at all? I, for example, know that the code I wrote is not going to cheat on unit tests, not going to re-implement half of the things from scratch when I'm working on a small feature, nor will it randomly delete files. After working with people for a while, I can be fairly sure that the code they've written can be trusted to the same standards. LLMs can't be trusted with these things, and in fact have been documented to do all of these things.
It is not a blind, absolute trust, but trust within reason. The fact that I have to explain this to you is honestly embarrassing.