Machine translations are often brought up as a gotcha whenever I criticize LLMs.
-
@geolaw @Tacas @Gargron their subtitles for the English dub are also terrible, in a way that can only be machine-generated without any oversight
Besides "misheard" words, missing punctuation, and names changing their spelling, it's also never clear who says what or when a different person starts talking
I hate it so much (and I'm at least able to notice when they're wrong, since I'm not deaf and can understand the audio in most cases; it must be worse for others).
-
Technology is not inevitable. We've decided not to have asbestos in our walls, lead in our pipes, or carcinogenic chemicals in our food. (If you're going to argue that it's not everywhere, where would you rather live?) We could just not do LLMs. It's allowed.
@Gargron It's hard to put the brakes on advances, like the Ghost Shirt Society finds out at the end of Vonnegut's Player Piano.
I heard an interview with a professor yesterday who wrote a book on the benefits of keeping cash alive and not relying completely on digital payment systems. He suggested using cash at least once a week. Maybe people will be able to do that with AI - limit their use and rely on their own brains at least some of the time. https://blogs.bu.edu/zagorsky/
-
Machine translations are often brought up as a gotcha whenever I criticize LLMs. It's worth pointing out two things: machine translations existed decades before LLMs, and yes, machine translations are useful. However: I would never in my life read a machine-translated book. Understanding what a social media post is talking about in rough terms? Sure. Literature? Absolutely not. Hell, have you ever seen machine-translated subtitles? They're absolute garbage.
@Gargron This was even true in the 1600s, when the Companies of (human) Translators were translating the Bible into English (the so-called "King James" version, 1611).
Translations of human language require the ability to translate the _sense_ of some local or regional usage into something similar in the target language.
They include a footnote indicating that one passage was essentially untranslatable, because the phrase was not understood by anyone. So they used context instead.
-
@Gargron we also had Concorde but it wasn’t economically viable. I mention that because I find that economic arguments seem to be heard more readily than moral arguments. (I often find that moral arguments induce temporary deafness in pro-AI people.)
@benedictc @Gargron imagine the cost of the subscription if all of those companies worked with real money and had to turn a profit from the start.
Imagine that they had to pay real copyright fees for all the content used in training the models.
Imagine that any of the illegal uses of the training data, or the deaths of people using their products, had meaningful consequences in court.
Imagine that they had to pay the full tax, the full price of the services that they use.
-
Transformers are neural networks.
LLMs are transformers wrapped in some Python scripting.
Every neural network can be accurately represented as an Excel sheet, even if it ends up having billions of cells.
Since it's just addition and multiplication, the model is fully deterministic. Same input, same output. Not intelligent.
It's Python code that does probabilistic sampling of the output. It's just a few lines of well-understood math plus a dice roll. Again, not intelligent.
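A minimal sketch of what that "math plus a dice roll" looks like (purely illustrative; the function names and numbers are my own, not any particular library's). The forward pass that produces logits is deterministic; the only randomness is the sampling step at the end:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, rng, temperature=1.0):
    # The "dice roll": pick a token index according to the distribution.
    probs = softmax(logits, temperature)
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# The network itself is deterministic: the same input always yields the
# same logits. Fixing the RNG seed makes even the sampling reproducible.
logits = [2.0, 1.0, 0.1]
token = sample_token(logits, random.Random(42))
```

With a fixed seed, the whole pipeline is reproducible end to end, which is the point being made above: the apparent "creativity" lives entirely in this one seeded dice roll.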
-
@Gargron LLMs are not exclusively a product of large corporations or just marketing. Much of the research and development also takes place in open source and academic communities. The code for these LLMs is public and can be audited or run locally. Furthermore, I argue that serious ethical reflection is necessary, but prohibition is not the way forward.
@df
Consciously not using something ≠ prohibition
Edit: Also, who cares who envisioned or now works on this? If you think about LLMs enough, you will likely see good enough arguments about the resource waste, centralization of power, and multiplication of slop that characterize LLMs. We lived without them before and we can live without them in the future.
-
@Gargron would you know if you've seen a good outcome of an LLM? You'd somehow be able to identify when the LLM got it right?
I assure you you've experienced good LLM output and don't even know it. Because that's what good LLM output looks like. Indistinguishable from human output.
Your examples are perhaps false equivalencies. Take asbestos. We didn't abolish insulation. We developed better, safer insulation. We didn't stop dyeing food; we just developed safer dyes, etc.
Let me ask you this: It's your birthday.
5 of your friends met some days before and wrote a song for you. It's not really good; the text doesn't even rhyme... but they did this for you and they had fun. They enjoyed the act of creating.
5 other friends wrote a prompt and pressed a button to generate a song.
Which song will you remember?
-
Technology is not inevitable. We've decided not to have asbestos in our walls, lead in our pipes, or carcinogenic chemicals in our food. (If you're going to argue that it's not everywhere, where would you rather live?) We could just not do LLMs. It's allowed.
@Gargron
I could not agree more.
-
@grishka One problem with LLMs is that they tend to translate and summarise what’s likely to be in the source text, not what’s actually in the text.
This means that when translating/summarising a text that deviates from the usual content in a subject or genre, the LLM will push it towards the common.
Using the result to understand the original contents is therefore very risky. For example, when screening texts, "incorrect" content might be "corrected", increasing the likelihood it will pass.
-
Let me ask you this: It's your birthday.
5 of your friends met some days before and wrote a song for you. It's not really good; the text doesn't even rhyme... but they did this for you and they had fun. They enjoyed the act of creating.
5 other friends wrote a prompt and pressed a button to generate a song.
Which song will you remember?
-
@Tekchip my walls are full of art by humans that some would call terrible... who the fuck cares? they have love and craft and pain and power from the hands and soul of a human creator. they are beautiful. i fucking love bad art.
slop generation is the nothingness.
just write your toot from your heart, fuck the machine. being human is fine.
@Gargron
-
Technology is not inevitable. We've decided not to have asbestos in our walls, lead in our pipes, or carcinogenic chemicals in our food. (If you're going to argue that it's not everywhere, where would you rather live?) We could just not do LLMs. It's allowed.
There are also failed technologies, like the Zeppelin.
-
@ClipHead @melioristicmarie @Gargron which is it?
"there is no value in the average."
or
"my walls are full of art by humans that some would call terrible... who the fuck cares?"
Can't have it both ways.
-
@Gargron It is a technology that humanity has been seeking for a long time. At least since the 1950s, with Turing and his colleagues.
LLMs are Shannon 1948 as far as the theory goes (building on Markov, but adding computer technology), plus some compression techniques.
But I think you're talking about something else entirely, not purely syntactical.
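As a toy illustration of that Shannon/Markov lineage (my own sketch, not Shannon's actual procedure): a first-order word model counts which word follows which, then walks those transition counts to generate text.

```python
import random
from collections import defaultdict

def train_bigram(text):
    # Count word-to-next-word transitions: a first-order Markov chain,
    # the kind of model Shannon used to produce English-like text.
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, rng):
    # Walk the chain: at each step, sample the next word in proportion
    # to how often it followed the current word in the training text.
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = counts.get(word)
        if not followers:
            break
        words_, weights = zip(*followers.items())
        word = rng.choices(words_, weights=weights)[0]
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(generate(model, "the", 5, random.Random(0)))
```

Modern LLMs condition on vastly longer contexts via learned representations rather than raw counts, but the generation loop — predict a distribution over the next token, sample, repeat — is the same shape.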
-
@Gargron would you know if you've seen a good outcome of an LLM? You'd somehow be able to identify when the LLM got it right?
I assure you you've experienced good LLM output and don't even know it. Because that's what good LLM output looks like. Indistinguishable from human output.
Your examples are perhaps false equivalencies. Take asbestos. We didn't abolish insulation. We developed better, safer insulation. We didn't stop dyeing food; we just developed safer dyes, etc.
@Tekchip @Gargron the tiny potential for very rare good outcomes is not worth the constant poisoning of humanity's collective information corpus.
For every piece of "good" generated content there are tens of thousands of pieces of terrible slop that are difficult to separate from genuinely useful information or material when doing research, code reviews, etc.
Not to mention that these "good" outcomes are much costlier to humanity than creating by hand, with no benefit.
-
@ClipHead @melioristicmarie @Gargron which is it?
"there is no value in the average."
or
"my walls are full of art by humans that some would call terrible... who the fuck cares?"
Can't have it both ways.
@Tekchip
so... is this a slop account? am i tooting with cheapgpt? are you a human playing with toys you do not comprehend?
dear dogs, may i have the confidence of a mediocre "white" man.
so... l.l.m.s tokenize english text... and then calculate an average.
humans making shitty art is qualitatively perfection in comparison to word salad from a calculator. when you enter this into wannabe deep seek... i will be waiting with bated breath for the token response. ; )
-
Technology is not inevitable. We've decided not to have asbestos in our walls, lead in our pipes, or carcinogenic chemicals in our food. (If you're going to argue that it's not everywhere, where would you rather live?) We could just not do LLMs. It's allowed.
@Gargron where is the perceptron
-
@df No, this is marketing. OpenAI, Google, Anthropic &co want you to believe that what they're doing is artificial intelligence. My professional opinion is that LLMs are a dead end technology to creating actual intelligence. And if any of those companies did create actual intelligence for the purposes they pursue, it would be slavery, for which I cannot advocate.
@Gargron they'll never create intelligence because intelligence requires will and they do not understand will. they don't even possess one of their own: their own behaviour is driven by feelings and shaped by a commercial playbook. there is zero chance they will ever create intelligence.