The LLM discourse on the Fediverse has really irked me the last few days.
-
Refusing to read writing made with the use of LLMs and refusing to give time to writers who use, promote or justify the use of LLMs is not purity culture, it's a boycott. It's a political act of withdrawing my time, resources and support for something that I find deeply morally wrong. It's protest. I have a choice and I refuse.
LLMs are exploitative, destructive, biased, mediocre parroting machines. Using them has a negative impact on the climate, the arts, the quality of the internet, the job market, the economy, the accessibility of electronics, even on skill development, creativity and mental health. LLMs are made and trained on the unpaid labour of millions, if not billions, of people who didn't consent. Their generic output litters the path to finding anything by true human creators.
Wherever I can, for as long as I can, I reject LLMs and anything that is related to them. I'm boycotting.
@reading_recluse I admit to having created songs with AI, pictures with AI, code with AI, clips of video with AI, all of it more out of curiosity than anything else. But generating text with AI, where is the fun in that...? AI-generated text gives me that immediate uncanny-valley effect more so than video, music, or pictures. I've quit buying the Sunday edition of a certain newspaper because, reading some articles, I was sure there was AI involved. If I got that feeling reading a novel, what a disappointment that would be.
-
@lproven@social.vivaldi.net @xs4me2@mastodon.social @reading_recluse@c.im
Hasn't been my experience. What have you tested it with?
Even tiny models in the 4-12B range have been able to handle the things I need (though granted, not as well as the 24-30B range).
My use-case is saving my hands from typing up repetitive patterns, analyzing my journals from several angles (e.g. what's my average mood based on the wording I use in my journals, and how does that relate to medical things like migraines), and as a parrot that'll repeat my plans/calendar back to me in different words, so I can overcome my own biases more easily.
I have found the available models entirely sufficient for these tasks.
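To make the journal use-case concrete, here's a minimal sketch of what "analyzing my journals with a tiny local model" can look like, assuming a local server with an OpenAI-compatible chat API (llama.cpp's server and Ollama both expose one); the URL, model name, and 1-5 mood scale are my illustrative assumptions, not details from the thread:

```python
import json
import urllib.request

# Hypothetical local endpoint; adjust to wherever your model server listens.
API_URL = "http://localhost:8080/v1/chat/completions"

def build_mood_request(journal_entry: str, model: str = "local-small-model") -> bytes:
    """Build a chat-completions payload asking a small local model to rate
    the mood of one journal entry. Constraining the reply to a single integer
    keeps the output trivial to spot-check against entries you wrote yourself."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Rate the writer's mood from 1 (very low) to 5 "
                        "(very high). Reply with the integer only."},
            {"role": "user", "content": journal_entry},
        ],
        "temperature": 0,  # keep the rating as repeatable as possible
    }
    return json.dumps(payload).encode("utf-8")

def rate_mood(journal_entry: str) -> int:
    """Send one entry to the local server and parse the 1-5 rating."""
    req = urllib.request.Request(
        API_URL,
        data=build_mood_request(journal_entry),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return int(body["choices"][0]["message"]["content"].strip())
```

Run over a folder of entries, this yields a mood series you can correlate with, say, migraine dates; since you wrote every input yourself, spotting a bogus rating is easy.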
Not for coding, though. Even Qwen3-Coder-Next, an 80B behemoth, just plain sucks at code.
Now, to be clear: I'm not saying they're always accurate when I use LLMs. I'm saying that because I use them with data I type up by hand and am intrinsically familiar with, they save me time and mental effort, because spotting problems is easy.
I wouldn't use them for any subject which I'm not already well grounded in, and in that specific way, I agree with you.
But I also wouldn't say they're extremely or dangerously bad at digesting and exploring information as such. No more so than code written by juniors without supervision.
Ultimately it's on the user to ensure the tool's output meets requirements.
Anecdotally, people aren't great at processing large amounts of information either. I work in infosec and curate a rather complex inventory/risk/audit/reporting toolkit. I pull data from over a dozen critical systems and sub-systems, networks, etc., including vast amounts of hand-written documentation, guides and explanations about how all of this works together.
I'm still the only person capable of actually using the entire toolset in concert - not even going into further development/integrations. Others rely on Cursor/Claude Code to use them. And that's fine by me: I'd rather have tools that get used than tools that are entirely dependent on me.
I guess my point is that in this scenario the problem isn't LLMs themselves. The problem is people who don't take time to read and understand the requirements, input and output.
(Of course, this is putting aside the ethical/political/economic/ecological problems, to keep this conversation more focused on the technical merits/demerits.)
@phil @lproven @reading_recluse
Exactly, and as always truth and reality are nuanced. I will be using it, and I will use my critical thinking (always).
-
@papageier @reading_recluse machine-woven cloth answered an essential need in a profitable, capitalistic way. Can we say the same about LLMs?
I think it is not inevitable, but time will tell.
@tseitr @papageier @reading_recluse My problem with this framing is: who gets to decide?
Define 'essential'. Is a new generation of MacBooks 'essential'? Not really. The ones we have are amazing. But nobody's boycotting the progress being made in chip design.
But the anti-LLM crowd seem to have decided: not having LLMs is 'enough'. Having them is superfluous. They're not 'needed'.
I get the pushback. I'll never use one to write prose, because prose comes from my human heart.
But to deny their utility in the world of code generation is to be dogmatic. The vast, vast majority of code generation isn't art: it's the rote stitching together of existing pieces to make a new thing.
Claude is _much_ better at that than I am. If properly controlled by me the result is better and more secure.
So, I use Claude. Just like I use an IDE and a higher-level language and just like I deploy to an edge network run by someone else vs. standing up my own. Because doing that is better than not doing that.
-
@reading_recluse This take bugs me so much. Calling the boycotting of LLMs 'purity culture' is the dumbest-ass take since Dems smeared Bernie as a sexist.
-
@phil @xs4me2 @reading_recluse My current favourite paper on this:
https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/
@lproven @phil @reading_recluse
There is no substitute for reading the primary material of your subject yourself, line by line, and internalizing it. I remember the days of our paper scientific library, where I would stay a whole afternoon reviewing Phys Rev B, Applied Physics, Applied Optics and more on the topic of my research, and in the end I had a stack of paper copies to take home and read. Online access hasn't fundamentally changed that; it has just become so much faster and more efficient.
-
@lproven @dynamite_ready @reading_recluse
In my opinion, you are incorrect here: a user is always responsible for critically digesting whatever a tool presents as truth, especially with tools like these. There is no substitute for critical thinking, and there never will be.
Truth and social context are infinitely more complex than analyzing a game of chess.
@xs4me2 @lproven @dynamite_ready @reading_recluse
What you're essentially suggesting here, is that LLMs are only good for consuming information if the user either already has the knowledge to judge output (in which case, why are they asking?) or spends time to verify the claims that the LLM makes (in which case, why bother asking the LLM?).
I've seen them make some pretty important mistakes, including suggesting that a Director who wasn't on the call being summarised had authorised something.
-
@ben @lproven @dynamite_ready @reading_recluse
I am suggesting that a competent user uses tools in the right way precisely through their in-depth knowledge of them. You can call that craftsmanship, experience, or simply domain knowledge.
That does not imply that tools, LLMs included, are useless, nor that they are without danger. A sharp chisel can cut off your finger; a poorly configured LLM can provide you with a load of nonsense...
-
@lproven@social.vivaldi.net @xs4me2@mastodon.social @reading_recluse@c.im
1. Paper from nearly 2 years ago. A lot has changed. Not to mention the 'test' the author (can't find their name, sorry) did is pretty dumb. It's much better to use an API, where you can control the full input pipeline to ensure the vendor isn't adding hidden instructions without your knowledge.
2. I already addressed the point in my previous comment - it's on the user to verify that tools have correct output. Relying on an LLM to do the reading in one's stead is a recipe for disaster.
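For what it's worth, the "verify the output" step doesn't have to be entirely manual. A toy heuristic of my own (not from this thread) for exactly the absent-Director failure mode: flag any name the summary mentions that never occurs in the transcript. It only catches invented attendees, not every hallucination, but it's cheap:

```python
import re

def names_not_in_transcript(summary: str, transcript: str) -> set[str]:
    """Return capitalized, name-like tokens that appear in the summary but
    nowhere in the transcript -- a crude guard against an LLM summary
    attributing actions to people who weren't on the call."""
    name_pattern = re.compile(r"\b[A-Z][a-z]+\b")
    summary_names = set(name_pattern.findall(summary))
    transcript_names = set(name_pattern.findall(transcript))
    # Anything the transcript itself contains (including sentence-initial
    # words) is considered grounded; only wholly absent tokens are flagged.
    return summary_names - transcript_names
```

Anything this flags still needs a human look (it can't tell "Carol" from a capitalized common word the summary introduced), but it turns "re-read the whole transcript" into "check three flagged tokens".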
You haven't said anything about YOUR use-case, experience, or the tests you tried.
I'm genuinely curious, what do you imagine using an LLM is like?
The reason I ask is because a lot of the criticism and panicking (sometimes crossing into outright disrespect and bigotry) I see online comes from an assumption that using an LLM is predicated on turning off one's brain and taking the output at face value... something that we shouldn't be doing with any software anyway.
I guess, put another way: I don't believe that the problems people attribute to LLMs are specific to LLMs. How many instances were there where management/execs took Excel output as fact, when the formulas were set up wrong?
These statistical models are no different.
-
@reading_recluse Completely d'accord. Also, LLM-produced "art" is so dull. I don't want to read it. For some reason my brain starts to shut down when reading an LLM-produced text. I forget the picture as soon as I close it. Same with music. AI-generated voices are so grating. The artificiality of it all makes me mad. It doesn't challenge me, it doesn't tell me anything, there is nothing intentional behind it. It's just - nothing. And it destroys the environment.
-
@reading_recluse u do u
-
@reading_recluse What disgusts me is the total disconnect from the natural world and the devastating effects of human activity in most forms on nature. We are hurtling toward ecocide and massive planetary collapse of current life forms. And what do they do? Grasp and exploit and posture and perform and strut in their massive ignorance of how a closed, interdependent, symbiotic living system actually works. The human supremacy religion means the death of all of us and a magical world full of beauty and wonder gone before its time.
-
@fergabell Completely true, I fully agree.
I really dislike that most LLM-defenders in my comments right now say something like: "Well actually, in this specific case LLM usage was actually helpful for me personally, so..."
Even entertaining the thought that it's somehow useful to someone somewhere, it doesn't erase the extreme damage it's doing to the world and to us collectively, or the massive scale of exploitation it's engaging in to keep it all afloat.
-
@ben @xs4me2 @dynamite_ready @reading_recluse No, that is not what I am suggesting at all.
You are trying to interpret my position on this through the lens of what *you* think they are good for.
-
@xs4me2 @ben @dynamite_ready @reading_recluse And I am disagreeing with that. I'm saying they are not appropriate for this stuff, whoever uses them and regardless of how they use them.
-
@reading_recluse I feel pretty much the same, save to say it's not the concept of LLMs that I'm against, but rather the theft of material for training, the impunity of that theft, and the determination to disclaim any possibility of giving fair payment or recognition to those whose work makes up the stolen data.
On top of that, I really, really dislike the cultish hype and forced use going on.
-
@reading_recluse For me, it doesn't make sense to think about LLMs in purely dogmatic categories like "in favor" or "against". The fact is, LLMs are out there now and won't just disappear, and they CAN be powerful and useful tools if used in a reasonable way. The problem is that a lot of people are currently overusing them and don't reflect enough on when and how to use them, which leads to a lot of AI-generated crap. Maybe humanity just needs more time to find a good balance of AI usage.
-
@johnnydecimal @tseitr @papageier @reading_recluse
"nobody's boycotting the progress being made in chip design"
[waving hand]
Over here.
We're boycotting chips that offer us nothing more than we want or need.
Run the web browser, word processor, printer drivers, scan drivers and network connections; do the security updates. And don't make the humans waste time with the damned computers. It's a lot to ask, but new chips are not going to do this any better.
-
@lproven @ben @dynamite_ready @reading_recluse
Let us respectfully disagree then.
You are right in the sense that a lot can go wrong as I elaborated on!
Time will tell!
-
@reading_recluse exactly!
As long as I can, I will resist. And to be honest, I don’t really care what people think of it
-
@reading_recluse Add "corporate" before LLM and I'll agree; don't generalize to A.I. as a whole, an infrastructure that has existed for longer than you think.
The debate alone gets annoying. Sure, you can share your opinions, but lately there's been a hype overflow that gets on many people's nerves, mind you.
Think what you want; not using it at all must be satisfying enough. While I agree corporate AI is pure trash and immoral, not all AI is. I wish you a good day ahead.

edit: Give room to others to explore and exchange ideas on how to make it better for everyone, instead of shooting it down. The gun is not the weapon in this case; the one pulling the trigger is. That's the same as comparing it to a nuke or to fire.