Saying that you do not want GenAI in the #books you read, the things you watch or the games you play is an understandable and NORMAL position.
-
@berniethewordsmith Bulkiest desktop: my newest one, a Ryzen 5 with an RTX 5070. It's not huge, but it's about 10 cm taller and 5 cm wider than my old one. Bulkiest laptop: a 2008 Celeron upgraded to a Core 2 Duo, now inactive because the charger died two months ago (probably because the Core 2 Duo draws almost twice the power).
I should still have an even bulkier desktop at my old place, a 2005-ish Athlon I set up in a tower I found in the garbage. I never measured it, but it was about twice the size of a standard PC tower. It was empty except for some SCSI cables, so I guess it was used for (literal) mass storage or mass CD/DVD burning.
@cygnathreadbare I have a lot of respect for your craft
-
@rrb @WellsiteGeo @Remittancegirl I know that if I Google this I will find a real-life example, and that is a fact that scares me about the world
@berniethewordsmith @WellsiteGeo @Remittancegirl not meaning to give people ideas. The homeless have enough problems
-
@alterelefant @HollieK72 @pluralistic called this "Habsburg AI" and I find the name incredibly fitting
@berniethewordsmith
Yes, that one. Always something fun to read. From what I understand, this is a great challenge for all the companies out there trying to assemble a high-quality dataset to train the next generation of AI. Humans generate new content, but AI generates much more new content, and you simply do not want AI-generated output to end up in your AI training set, because you know the quality of that content is subpar and unreliable. The irony.
-
@alterelefant @MyricaGale At least in previous iterations of "was online, must be true" you had an actual creative human inventing the bullshit. There was passion in the lie

@berniethewordsmith
When people try to convince the reader that something is true when it isn't, your 'bullshit meter' should be able to pick up on that and say: hold on, that doesn't add up. It feels like some people are not critical enough towards #LLMs pulling the same tricks as they mimic human writing. Those individuals probably also weren't too critical about those human-generated stories to begin with.
@MyricaGale
-
@alterelefant @berniethewordsmith And nobody will be responsible for those accidents, because the AI itself can't be responsible, and the people who make it, sell it, and market it as able to do things it can't really do will be able to get off consequence-free because of an asterisk in the fine print.
@Linebyline
Yes, accountability doesn't seem to be a thing here.
@berniethewordsmith
-
@hopeless I decided to leave the coding part aside in the thread simply because I am not a coder and I do not want to make assumptions about something I do not know much about. But I gotta say I'm a bit concerned about all of that vibe coding I hear about, although I'm almost certain that the thing you are speaking about and vibe coding are not the same.
@berniethewordsmith Vibe coding is really the same problem you identified: they asked for a whole program, like a "book" or a "script," and used what came out of the AI wholesale. The AI took the most direct, unmaintainable, hacky way to do what was asked.
The coding AIs can give fast, excellent results if you ask them for something narrower on top of a project that's already well written and high quality, which naturally guides them to "fit in". Even then, the output needs some manual cleaning to be usable.
-
@alterelefant @HollieK72 @pluralistic called this "Habsburg AI" and I find the name incredibly fitting
@berniethewordsmith @alterelefant @HollieK72 Stole it from Jathan Sadowski of "This Machine Kills"
-
1. Nobody said shit about Trump. But since you asked, he sucks too.
2. Yep, we do not like all those things. You got it. Since when is an abundance of reasons an unreasonable position?
-
I said "I don't like" because I don't like those things.
I also don't like "shitty novels," like you mentioned. But I don't care where the "shit" came from. "Shit is shit," you know. And I have seen a lot of "shitty novels." Also, an LLM is not even capable of writing a novel on its own, so you need people to tell it what to write about. A computer is also not capable of shitting, so the shit is always human-made.
Seems like you both like to talk about "shit".
"I don't like shitty novels unless they are self-written" is a really bad argument against AI.
-
"I don't like shitty novels unless they are self-written" is a really bad argument against AI.
@TheOneSwit @ojelabii It's a perfect argument in favor of art
-
@TheOneSwit @ojelabii It's a perfect argument in favor of art
The same goes for Gutenberg.
Have you ever seen someone bind a book, and how much craftsmanship you need for it?
AI is a tool. There are lots of reasons to dislike AI, or even to prohibit its massive use; shitty novels are not a good reason.
-
@michael @berniethewordsmith What are LLMs, then, if not merely next-token prediction? Obviously they're more sophisticated than a markov chain, but if they're more than prediction, *what* more?
Also you're flat-out wrong about artists. A great many of them hate AI enough that they won't use it on principle, and avoid tools that might sneak it into their workflow without their consent.
Technically, LLMs of course just generate text one token at a time. But it's an oversimplification to say that somebody just loaded one up with Reddit, Project Gutenberg, and The Pirate Bay and uses that to generate text similar to what it has seen. As you say, an advanced Markov generator.
LLMs do get trained on all that more-or-less-ethically-obtained text, but that's just step 1. Next comes fine-tuning, and last, reinforcement learning. Reinforcement is what makes the difference: the model is trained not just to reproduce text it has seen, but to produce text that makes the trainer happy.
Part of what makes the trainer happy is that the answer is correct (or reinforces their existing bias). Sometimes that's more or less verbatim quoting a Reddit shitpost, sometimes that's picking out points from an academic paper, and sometimes that's just random gibberish formatted so it looks right.
But at the core, an LLM has three things: a model of what language looks like, a non-trivial percentage of human knowledge in the form of the internet, and an incentive to provide answers that make the trainer happy.
We do not know what that model looks like, except in very simple cases. It works surprisingly well for what it is, but it is not intelligent.
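To make the "advanced Markov generator" comparison concrete, here is a minimal toy sketch (purely illustrative, not how any real LLM works): a first-order Markov chain that picks the next word based only on the current word. This is the baseline that next-token prediction in LLMs goes far beyond.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, n=8):
    """Walk the chain from `start`, picking a random observed successor each step."""
    out = [start]
    for _ in range(n):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(random.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat the cat ran")
print(generate(chain, "the"))  # e.g. "the cat sat on the mat the cat ran"
```

The point of the comparison: this toy only ever recombines word pairs it has literally seen, whereas training, fine-tuning, and reinforcement give an LLM far richer statistics and an objective beyond imitation.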
People didn't hate Google Translate or image search before this kind of technology was rebranded as AI; those use simpler versions of the same techniques. Artists often make use of Photoshop features like "select subject" or "context-aware fill," which are run by neural networks. There is much more to AI than just generative AI, and parts of it are useful.
The problem is not LLMs (a priori, ignoring the ethics of how they are trained); it's the AI companies having to hype them up. IMO, the correct response is not claiming they are not useful for anything – they obviously are to a lot of people – it's to challenge what they are used for.