I keep seeing lots of people saying "LLMs are like compilers/assemblers for prompts"
Noooooooooo
Nooooooooooooooooooooooooooooo
LLMs are not compilers, and they're not assemblers. Determinism is a key aspect of assemblers and compilers.
And they *certainly* can't be part of a reproducible pipeline
@cwebber if, just like with asm, reading and reviewing generated code is no longer a necessary thing, then the productivity bottleneck shifts to how much time is spent "engineering" the prompt.
-
@joeyh I mean real talk that's why I don't play preset seeds in roguelikes, hooked on that RNG juice
-
@ansuz @joeyh And of course there is the question, what is and isn't a compiler? Aren't all functions compilers?
Indeed, Blender's rendering system is in many ways a compiler for images.
But we don't use it that way, because it's not helpful. People are reaching for "LLMs might be compilers!" because of the thing they want LLMs to *do* rather than how they *act*, even though Blender and ffmpeg are, under those definitions, by far more compiler-like than LLMs are.
-
Ah but even if you can use a specific seed and try to use this to call it a "compiler", your compiler here is the very specific set of weights within that model, and any change breaks its determinism. I don't think something for which there is one and exactly one possible implementation producing the specified set of outputs can count as an actual compiler.
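A toy sketch of that point, using only the standard library and made-up logit numbers (a hypothetical stand-in for real model weights): a fixed seed makes sampling reproducible, but only for that exact set of weights, so the weights themselves become part of the "compiler".

```python
import math
import random

# Hypothetical next-token logits after the word "the". The numbers are
# invented; the point is that these weights ARE the "compiler" here.
logits = {"cat": 2.0, "dog": 1.5, "end": 0.1}

def sample_next(logits, seed):
    """Softmax-sample one token using a seeded RNG."""
    rng = random.Random(seed)  # fixed seed -> reproducible draw
    tokens = list(logits)
    weights = [math.exp(logits[t]) for t in tokens]  # choices() normalizes
    return rng.choices(tokens, weights=weights, k=1)[0]

# Same seed + byte-identical weights: the output is reproducible.
assert sample_next(logits, seed=7) == sample_next(logits, seed=7)

# But any change to the weights (a fine-tune, a quantization pass, a new
# checkpoint) is effectively a different "compiler": the same seed may now
# pick a different token, silently breaking reproducibility.
tweaked = {**logits, "dog": 2.6}
print(sample_next(logits, seed=7), sample_next(tweaked, seed=7))
```

The seed pins down the randomness, not the spec: there is no language definition that any other implementation could satisfy, which is the sense in which this isn't a compiler.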
-
@cwebber If I hear "LLMs are like higher level languages" one more time I will end up on the news, I think
@eramdam@erambert.me @cwebber@social.coop Twitter tech influencers have been saying this for years already
-
@cwebber It's pretty simple. If it's like a compiler, then why do you check in the output? And with all the work put into making compilers more efficient (not just making the *output* more efficient), why does it take so long and require an internet connection?
-
@ansuz @cwebber @joeyh the reproducibility will also get pulled out from under you as the model you used gets sunset. Unless all you check in is a series of prompts and a bunch of tests and simply assume future models will do a better job.
It could even be a problem where future generations want a "vintage AI" look for whatever reason and unlike so many past generations of tech, they simply won't be able to because it was a cloud service and the company is long gone.
-
@cwebber I'm only going to say that if natural human language were suitable for expressing expected results in a predictable and well-defined manner, we wouldn't have spent the last 50 years memorizing rulebooks that say "MUST means that the definition is an absolute requirement of the specification."
At this point my rage almost goes beyond whether it's an LLM or a Witch's Cauldron taking the prompts. I want to scream at people NATURAL LANGUAGE IS NOT A RECOMMENDABLE INPUT FORMAT.
-
@cwebber well, it was until C99 anyway...
-
@thomasjwebb @cwebber @joeyh
Local models like llama could be reworked to accept a seed for their RNG. There'd be less risk of them becoming unavailable, and they'd be both deterministic and reproducible, but they'd still be terrible for all the other reasons that LLMs are terrible.
"Sovereign" and reproducible slop is still just slop
-
@cwebber I think we can compromise and call them really shitty compilers.
-
I was thinking LLMs are like Ouija boards or tarot readings.
Semi-random noise where meaning is imposed by the participating humans.
-
@cwebber gamified transpilers at best
-
@mntmn @cwebber I think the single interesting thing LLMs have revealed is that there is a substantial market segment with an active desire for natural language interfaces to the computer, who will flip from "do not engage with the computer" to "engage with the computer" if a natural language interface becomes available.
I do not personally want a natural language interface to the computer. I also do not believe the thing LLM vendors have built is a natural language interface to the computer.
@mcc @mntmn @cwebber Do you remember AskJeeves? A friend of mine worked for them, and told me that their whole thing had been natural language Web searches, but after a few years, their internal research showed that almost all their users were doing searches for literal text, or literal text connected with Boolean operators, just the way they used the other search engines. It wasn't that "natural language search" didn't work, it's that no one wanted to use it.
When I was looking up how to disable Google Assistant on my phone, a few of the articles I read opened with some claim that it was the primary reason to use an Android phone to begin with. But outside TV shows, I've rarely heard anyone trying to use it.
Corporations were trying to market GUI desktops for the Commodore 64.
I'm skeptical that there's really that much demand for natural language interfaces and skeuomorphism. We've been using tools for two million years that usually don't much resemble the human body.
-
@cwebber "LLMs are compilers for prompts" says a lot more about someone's ignorance about compilers than about their knowledge of LLMs.
It's so stupid, it's almost like wearing coconut shells on your ears and yelling into a stick and hoping that pallets of food start falling from the sky.
-
@ansuz @cwebber @joeyh I do want to play around with llama but that goes so against my instincts of always trying to make development put less strain on my computer (like, I really hated how bloated VS Code felt). And while yeah, having the model and source code is certainly an improvement, my experience with getting AI/GPU stuff from the past up and running again is... not fun. Having to resurrect a 10-year-old version of a model would definitely suck.
-
Maybe LLMs are "dissemblers".
-
@cwebber the methods used to prepare the data are similar (preprocessing, encoding, tokenization). If you turn the temperature on an LLM down to 0, it can be used to deterministically output the word with the highest probability at every step. People aren't talking about that in this case, though.
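A minimal sketch of what "temperature 0" means, with invented logit values: as the temperature approaches 0, temperature-scaled softmax collapses onto the highest logit, which is why greedy decoding is implemented as a plain argmax.

```python
import math

def softmax(logits, temperature):
    """Temperature-scaled softmax; as temperature -> 0 it collapses
    onto the max logit (subtracting the max keeps exp() stable)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1.0, 3.0, 2.0]  # hypothetical next-token logits

# At temperature 1 the probability mass is spread across tokens...
warm = softmax(logits, 1.0)

# ...but near 0 almost all mass sits on index 1 (the max logit), so
# "temperature 0" decoding is just a deterministic argmax.
cold = softmax(logits, 0.01)
greedy = max(range(len(logits)), key=logits.__getitem__)
assert greedy == 1
assert cold[1] > 0.99
```

Even then, the determinism is relative to one fixed set of weights, which is the reply's point: the option exists, but it isn't what the "LLMs are compilers" crowd is describing.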