This blogpost makes an astoundingly good case about LLMs I hadn't considered before.
-
@futzle @cwebber
When newbies encounter toxicity for asking their question on a public forum, you cannot really blame them for turning to an LLM.
https://youtu.be/N7v0yvdkIHg
-
@bornach @futzle @cwebber not just newbies. I'm 35 years into this work, so while this is not "my first rodeo", I regularly have to work on something completely new to me. What a lot of these pricks don't understand is that many of us don't have the time to deep-dive into their pet platform, framework, tool, or language, and we don't know how to ask the "right" questions. But still, they're at least human, and with a little patience you might just tease the right answer out of them.
-
@KatS If you ask me, the first thing to do is to ensure everyone understands how AI is a fascist project.
This also answers a common pushback against anti-AI criticism, namely that not all AI is the same. The key question is always "how does the use of AI in this case disenfranchise people?". (Same for "the cloud", btw, but people are even less willing to hear that.)
This is a conversation that can be had with non-techies.
"It makes my job easier" is a good argument for AI. On the other hand, ...
-
@KatS If you ask me, the first thing to do is to ensure everyone understands how AI is a fascist project.
This also answers a common pushback against anti-AI criticism, namely that not all AI is the same. The key question is always "how does the use of AI in this case disenfranchise people?". (Same for "the cloud", btw, but people are even less willing to hear that.)
This is a conversation that can be had with non-techies.
"It makes my job easier" is a good argument for AI. On the other hand, ...
@KatS... "it makes my work experience useless and devalues me as an employee" is a really bad sign.
Note that work requirements *changing* is the tricky bit here. The only constant is change. But change doesn't have to devalue your contribution.
Moving from pen to typewriter to computer didn't devalue the writer. Moving to LLMs does.
And then, it's unions.
-
This blogpost makes an astoundingly good case about LLMs I hadn't considered before. The collapse of public forums (like Stack Overflow) for programming answers coincides directly with the rise of programmers asking for answers from chatbots *directly*. Those debugging sessions become part of a training set that now *only private LLM corporations have access to*. This is something that "open models" seemingly can't easily fight. https://michiel.buddingh.eu/enclosure-feedback-loop
yup.
Competition for dominance necessitates enclosure of the commons, limiting the use of extremely valuable common human dimensions to just the aggressive (or aggressively funded), and precluding creative potentialities except when championed by insiders in line with corporate financial models.
Over and over, humanity suffers profound losses.
-
@cwebber of course guys - it was never about the LLM, it was about crowd-sourcing intelligence at an epic scale. Every piece of code a developer writes and fixes becomes training data. Same with every conversation. I'm surprised people don't see the danger in having one single overlord and gatekeeper of all information in the world. It's crazy.
People seem to have forgotten what democracy and multilateralism really mean.
@mahadevank @cwebber Forget trying to explain that. The "experts" at Davos laid it out for everyone. Yet, somehow, they're still optimistic that one entity dominating all others, essentially destroying competition, will bring forth a world of opportunities. It's an all-out war, and anyone who doesn't have the resources to insert XYZ's brain into their stack is just a foot soldier for those that do.
-
This blogpost makes an astoundingly good case about LLMs I hadn't considered before. The collapse of public forums (like Stack Overflow) for programming answers coincides directly with the rise of programmers asking for answers from chatbots *directly*. Those debugging sessions become part of a training set that now *only private LLM corporations have access to*. This is something that "open models" seemingly can't easily fight. https://michiel.buddingh.eu/enclosure-feedback-loop
@cwebber I'm so glad I stumbled upon this thread, pointing out the fascist nature of the global AI race so many are calling the great democratizer. With enough critical thinkers, maybe civilization will come to its senses before hyperscale data centers become this era's pyramids to explore in the future.
-
This blogpost makes an astoundingly good case about LLMs I hadn't considered before. The collapse of public forums (like Stack Overflow) for programming answers coincides directly with the rise of programmers asking for answers from chatbots *directly*. Those debugging sessions become part of a training set that now *only private LLM corporations have access to*. This is something that "open models" seemingly can't easily fight. https://michiel.buddingh.eu/enclosure-feedback-loop
@cwebber And here I was thinking "docs in Discord" was bad enough.
-
@cwebber you calling it an 'astoundingly good case' makes me feel insightful in a way no LLM has been able to accomplish. I'm going to be insufferably smug for the rest of the day

@michiel Haha, you deserve it! It's an angle I hadn't considered; it really shook me up, and I've spent a ton of time thinking about it since.
-
This blogpost makes an astoundingly good case about LLMs I hadn't considered before. The collapse of public forums (like Stack Overflow) for programming answers coincides directly with the rise of programmers asking for answers from chatbots *directly*. Those debugging sessions become part of a training set that now *only private LLM corporations have access to*. This is something that "open models" seemingly can't easily fight. https://michiel.buddingh.eu/enclosure-feedback-loop
@cwebber I've been saying this for a while. Bubble or not, our profession (and/or vocation, if you prefer) is screwed.
-
This blogpost makes an astoundingly good case about LLMs I hadn't considered before. The collapse of public forums (like Stack Overflow) for programming answers coincides directly with the rise of programmers asking for answers from chatbots *directly*. Those debugging sessions become part of a training set that now *only private LLM corporations have access to*. This is something that "open models" seemingly can't easily fight. https://michiel.buddingh.eu/enclosure-feedback-loop
@cwebber as in many other fields, we have to have real communities who care about stuff.
-
@cwebber I've been saying this for a while. Bubble or not, our profession (and/or vocation, if you prefer) is screwed.
@datarama Possibly, though I worry less about professions/vocations than I do about user empowerment. I have long assumed that some day programmer salaries would be unsustainable.
Of course the irony is that many people are shilling LLM services as being empowerment systems. I see them as the opposite. Open, community developed LLMs could be, but LLM-as-a-service corporations are definitively not.
-
@datarama Possibly, though I worry less about professions/vocations than I do about user empowerment. I have long assumed that some day programmer salaries would be unsustainable.
Of course the irony is that many people are shilling LLM services as being empowerment systems. I see them as the opposite. Open, community developed LLMs could be, but LLM-as-a-service corporations are definitively not.
@cwebber By vocation, I also mean "people who like to write software".
If I lost my job but still had that, I'm sure I could become a happy store clerk or train driver who hacked on community software in my free time. But in AI Hell, we can't even have that. My option is to become a miserable store clerk or train driver (until that too is automated away) who consumes AI-generated slop forever. And that is what is coming for all of us - current-day programmers are just going to get there first.
(Incidentally, I make less than a third of what people on the internet tell me American software developers with my level of experience do - but I'm no more or less screwed than they are.)
-
@datarama Possibly, though I worry less about professions/vocations than I do about user empowerment. I have long assumed that some day programmer salaries would be unsustainable.
Of course the irony is that many people are shilling LLM services as being empowerment systems. I see them as the opposite. Open, community developed LLMs could be, but LLM-as-a-service corporations are definitively not.
@cwebber And the problem is, LLM development is *extremely* capital-intensive. Unless you have a "community" of billionaires, it's going to be very hard to make anything that can compete with the hyperscalers.
-
@cwebber By vocation, I also mean "people who like to write software".
If I lost my job but still had that, I'm sure I could become a happy store clerk or train driver who hacked on community software in my free time. But in AI Hell, we can't even have that. My option is to become a miserable store clerk or train driver (until that too is automated away) who consumes AI-generated slop forever. And that is what is coming for all of us - current-day programmers are just going to get there first.
(Incidentally, I make less than a third of what people on the internet tell me American software developers with my level of experience do - but I'm no more or less screwed than they are.)
@datarama @cwebber I can attest that it's still possible to hack on free software in your spare time if you lose the tech job, but you get a heck of a lot less free time to do it in. And a heck of a lot less energy to do it with. All against a billionaire-induced media backdrop of your primary interest now being irrelevant, which is demoralizing.
But if you can find the time and maintain the energy, there is still a community even more stubborn than in the "GPL is a cancer" days.
-
@datarama @cwebber I can attest that it's still possible to hack on free software in your spare time if you lose the tech job, but you get a heck of a lot less free time to do it in. And a heck of a lot less energy to do it with. All against a billionaire-induced media backdrop of your primary interest now being irrelevant, which is demoralizing.
But if you can find the time and maintain the energy, there is still a community even more stubborn than in the "GPL is a cancer" days.
@randomgeek @cwebber It's *possible*, of course, but it all feels rather pointless now.
And everything you make and share freely is appropriated to improve the Immiseration Machine.
-
@cwebber but also, as uninviting as the Stack Overflow culture may have been, the moderators were there to try to get people to ask better questions. I doubt LLMs will handle things like X/Y-problem issues, so to me it seems things will get worse for people able/willing to pay as well.
@martijn @cwebber IMHO Stack Overflow may have been toxic, but it was a sort of forum with low-friction access (easy to search, easy to ask, easy to reply) where you interacted WITH PEOPLE.
People are key. I remember names from the linux-kernel list in the mid-90s - I joined Mastodon in 2022 and found those same people here.
Whatever site or forum or network or anything we build, I want to read from people, not bots.
-
@randomgeek @cwebber I'm also autistic. (Though I'm Danish, so the *least* famously crazy kind of Scandinavian.)
In the beginning of all this, I thought and felt much the same. Now I just feel drained and defeated.
Because yes, the struggle itself is enough to fill a human heart and we must imagine Sisyphus happy. But it sucks to be Sisyphus when someone put up a ski lift next to him.
-
This blogpost makes an astoundingly good case about LLMs I hadn't considered before. The collapse of public forums (like Stack Overflow) for programming answers coincides directly with the rise of programmers asking for answers from chatbots *directly*. Those debugging sessions become part of a training set that now *only private LLM corporations have access to*. This is something that "open models" seemingly can't easily fight. https://michiel.buddingh.eu/enclosure-feedback-loop
@cwebber I think there is a flaw in the theory that big AI can use this shift from forum to chatbot to train new models. The thing that makes Stack Overflow valuable is not the question but having experts provide an answer, and a mechanism for others to add weight to it being correct.
Interactions with LLMs really don't have the same feedback loop. They collect the questions from the users, but there is no expert to provide the answer to train from. I suppose there's some training data there, but it's not nearly as direct as what was originally scraped from SO.
I suspect training future models is going to be much more challenging.
-
This blogpost makes an astoundingly good case about LLMs I hadn't considered before. The collapse of public forums (like Stack Overflow) for programming answers coincides directly with the rise of programmers asking for answers from chatbots *directly*. Those debugging sessions become part of a training set that now *only private LLM corporations have access to*. This is something that "open models" seemingly can't easily fight. https://michiel.buddingh.eu/enclosure-feedback-loop
@cwebber yet another externality for the bot-lickers to ignore when they say "ethical and environmental issues aside..." and praise the occasionally useful slop that the stochastic slot machine gives them as they burn billions of tokens in Gas Town.