Spent the day talking to workers council members about "AI".
-
@glyph the number of times I asked a CEO/CTO about their "AI" project and how they actually measure cost, or what their measurable criteria for success are, and only got someone looking at me as if I were speaking in tongues, is really scary.
Like: Isn't turning everything into metrics and measurements in order to make data-driven decisions what management is supposed to do?
Mostly 'Managers' don't have a clue.
Sales run rings round them with half-truths and promises.
Tech staff have to clean up the mess, underpaid,
often without adequate training. It is the history of #britain, the charge of the Light Brigade enacted time after time, Tommy Atkins in the trenches, the many wounded in the Boer War.
Incompetence of management. Upper-class twits.
Thin red line, you aren't allowed to duck. AI will fail eventually - bad management.
-
Spent the day talking to workers council members about "AI". And it's kinda wild hearing their stories from the wild: Management is 100% in fantasy "AI can do everything" land and makes huge plans for how to use "AI" to cut workers, when real projects that supposedly can do 50% of a specific task end up being able to do 8%. And they still go live. It's fucking bonkers. CEOs are really not okay.
@tante I struggle to remember a time in my life when CEOs were "okay"
but what's goin on right now, phew
-
@tante do you have a link to that framework?
Also: https://labornotes.org/2026/03/four-union-strategies-fight-ai
In #australia the #actu, the peak #union body, has betrayed its members by signing a deal with #microslop just as they fired 15,000 public servants due to #ai encroachment.
I've done some work analysing the betrayal by the union.
#regulateai #unionstrong
-
@tante
"workers council"
(very Soviet/Debord)
-
I'm genuinely starting to wonder if LinkedIn has a significant share of the blame for this. There is a certain strain of brain-rot (not just related to AI) that seems to be unreasonably prevalent in management, and I'm not sure what other contamination vector there might be.
-
@tante Goals for this year from the top: everyone should use AI. We shall find at least one good use case for AI per team. So much bullshit.
-
@tante
The people most likely to be destabilized by LLMs are the people most insulated from contradiction, and executives are professionally insulated from contradiction.
-
@aud @tante @glyph well they do have metrics, it's just that they're generally ad-hoc and terrible metrics
and even when they aren't, Goodhart's Law ensures that relying on them turns the exercise into farce relatively soon.
arguably that kind of farce is the entire history of the false spring: "simply scale it up" worked surprisingly well, then worked surprisingly well again, and therefore we can extrapolate that it will work forever and [financial irresponsibility] and oops now it's not working anymore oh shit oh fuck uhhhh AGENTS, we're doing agents now! Yea, that's the ticket. (and so on)
@SnoopJ @aud @tante there are so many people who are really, actually offering incentives and bonuses for *token use* though. Like it's not just a thing that is happening somewhere; it seems to be one of the more *common* mechanisms.
I was sure when I first heard about this that it must be some kind of self-dealing kickback scam? But as far as I can tell… no? It's just a thing that managers *actually* think is a good idea. Literally incentivizing direct waste by employees.
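That token-use bonus is a textbook Goodhart's Law target. A minimal hypothetical sketch (the tokenizer and the padding trick are made up for illustration, not anyone's real billing code) of how trivially the metric is gamed:

```python
# Hypothetical sketch: if a bonus scales with tokens consumed, the metric
# is trivially gamed by padding prompts -- Goodhart's Law in one function.

def token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace word.
    return len(text.split())

def pad_prompt(prompt: str, filler_words: int) -> str:
    # Append meaningless filler to inflate the token metric.
    return prompt + " please" * filler_words

useful = "Summarize ticket #123"
padded = pad_prompt(useful, 1000)

# Same work requested, wildly different "usage" numbers.
print(token_count(useful))   # 3
print(token_count(padded))   # 1003
```

The work requested is identical; only the number being rewarded changed.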
-
@jaredwhite @tante why wouldn't they? The people who bullshit for a living are (ironically) not threatened, they're having the time of their lives instead
@ehproque @jaredwhite @tante I've read some messed up stuff today - but this could be the most terrifying.
-
Which was really fucked up to see: These folks actually want to protect their organizations from burning a lot of resources on bullshit instead of fixing actual problems that help the workers _and_ the organization. And they have to actively fight management who got their brains ruined on linkedin.
@tante I can’t get an answer to a simple question for the last few months: “what’s the goal and how will you know we’ve achieved it specifically thanks to AI?”
Because if the work I’ve been doing to remove obstacles to productivity for the last year and a half will get attributed to this bullshit, I’ll start complying maliciously.
-
"AI is going to make us more productive at shipping our software."
"Great! Amazing! That must be several phd theses you got there! Well done! Didn't know you had it in you."
"?!?"
"Well, I mean, you must have figured out how to measure software development productivity reliably, right? What's our baseline at?"
@larsmb I didn’t know how much I wanted to scream until I read this…
-
@tante what a time to be alive. I'd be interested in seeing how you frame the discussions.
-
But: If you have any chance to speak to unions/workers from different domains and organizations do so.
It's fascinating how
a) different organizations are and operate
b) they all end up with the same handful of structural problems
-
Your remarks make me think that employees could make a proposal to investors, and here I am making a pretty big assumption, that they can run the company better than management. They can plan to say this to investors after the first major disaster.
The assumption I’m making here is that the investors are all interested in the company doing well rather than soaking money out of it by playing stock movements based on AI
-
@tante it's really depressing. i would love to find some sane people around me. the management proudly and loudly says they are "#AIpilled".
today i was in a meeting where people were supposed to discuss their use-cases and experiences with using LLMs, and someone mentioned that they tried to do a financial analysis but the chatbot hallucinated all the results, including stock prices. then someone told them they can just explicitly ask it to not make anything up and it will be much better. like, these guys are flipping billions of dollars in investments, and real people are getting fired because their work now supposedly can be done by tools that you are supposed to nicely ask to pretty please not make anything up... what the actual fuck.
-
The thing about agents, from what I understand in talking to vendors about using them, is that to use them correctly you have to build very detailed and specific playbooks for them to "follow".
In practice, it seems like most people just think you can Claude your way to success with vibes and vaguery.
They seem to think having an agent eliminates the hard part: defining your process in clear language. In truth, it's more important because an agent won't have the "common sense" to not delete and recreate your production database at 4:30 on a Friday before a three day weekend. Or just delete it.
This is not even including the identity and access boundaries you need. Like, we are having deep discussions about an agentic solution that would just read help desk tickets and make suggestions to the help desk personnel. We have to consider all the ways prompt injection could abuse its access. And when the agentic AI is telling people what to do, that's a prime target for social engineering. They want it to be able to reboot servers. That's a denial of service attack waiting to happen.
An outside vendor we've spent lots of money on is trying to sell us a multi-agent system that management is already in love with, and we have to educate them on the almost unfathomable risk it would create. How are they forgetting everything they've ever learned about risk modeling, threats, fraud, attack surfaces, least privilege, etc.? These are not stupid people, but they are acting like wide-eyed children just because it has the word "AI" attached to it. They should be more skeptical, not less.
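For what it's worth, the least-privilege point above can be sketched as a deny-by-default action gate. Everything here is hypothetical (the action names and the help-desk agent are invented for illustration), but it shows the shape: the agent may only suggest, and anything outside an explicit read-only allowlist is refused, so a prompt-injected "reboot server X" goes nowhere.

```python
# Hypothetical sketch of a least-privilege gate around an agent's actions:
# deny by default, allow only an explicit read-only set.

ALLOWED_ACTIONS = {"read_ticket", "suggest_reply"}  # no reboot, no delete

def execute(action: str, payload: str) -> str:
    if action not in ALLOWED_ACTIONS:
        # Refuse and surface for human review instead of acting.
        return f"DENIED: {action!r} is not in the allowlist"
    return f"OK: {action} on {payload}"

print(execute("suggest_reply", "ticket-42"))
print(execute("reboot_server", "prod-db-01"))  # injected instruction: refused
```

The point is that the boundary lives outside the model: no amount of prompt injection can add `reboot_server` to the allowlist, because the model never decides what is allowed.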
-
@jrdepriest @SnoopJ @aud @tante "what if we could get rid of everything that *wasn't* an insider threat. what if the entire inside was made of threats? would that fix it?"
-
@jrdepriest @SnoopJ @aud @tante like I'm trying to make light of it with little jokes but that is LITERALLY WHAT IS GOING ON in an absolutely WILD number of places