Spent the day talking to works council members about "AI".
-
Spent the day talking to works council members about "AI". And it's kinda wild hearing their stories from the wild: Management is 100% in "AI can do everything" fantasy land and makes huge plans for how to use "AI" to cut workers, while real projects that supposedly can do 50% of a specific task end up being able to do 8%. And they still go live. It's fucking bonkers. CEOs are really not okay.
-
But: If you have any chance to speak to unions/workers from different domains and organizations, do so.
It's fascinating how
a) differently organizations are structured and operate, and
b) they all end up with the same handful of structural problems.

Your remarks make me think that employees could make a proposal to investors (and here I am making a pretty big assumption) that they can run the company better than management. They could plan to say this to investors after the first major disaster.
The assumption I'm making here is that the investors are all interested in the company doing well rather than in soaking money out of it by playing AI-driven stock movements.
-
@glyph the number of times I asked a CEO/CTO about their "AI" project, how they actually measure cost or what their measurable criteria for success are, and only got someone looking at me as if I were speaking in tongues is really scary.
Like: Isn't turning everything into metrics and measurements in order to make data-driven decisions exactly what management is supposed to do?
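To make that ask concrete: a minimal, purely hypothetical sketch (all names and numbers invented for illustration) of the kind of measurement a pilot could produce before an "AI" project is allowed to go live.

```python
def pilot_metrics(outcomes: list[bool], cost_total: float,
                  baseline_cost_per_item: float) -> dict:
    """Summarize a pilot: outcomes holds True for each item the tool
    handled correctly end-to-end, False otherwise."""
    handled = sum(outcomes)
    return {
        # share of items the tool actually completed (the "8% vs 50%" number)
        "success_rate": handled / len(outcomes),
        # total spend divided by successful items (guard against zero successes)
        "cost_per_success": cost_total / max(handled, 1),
        # go/no-go signal: is the tool cheaper than the existing process?
        "cheaper_than_baseline": cost_total / max(handled, 1) < baseline_cost_per_item,
    }
```

With numbers like those in the thread (8 successes out of 100 items), even a modest total spend can make the per-success cost far worse than the baseline, which is exactly the comparison that never seems to get made.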
-
Spent the day talking to works council members about "AI". And it's kinda wild hearing their stories from the wild: Management is 100% in "AI can do everything" fantasy land and makes huge plans for how to use "AI" to cut workers, while real projects that supposedly can do 50% of a specific task end up being able to do 8%. And they still go live. It's fucking bonkers. CEOs are really not okay.
@tante it's really depressing. i would love to find some sane people around me. the management proudly and loudly says they are "#AIpilled".
today i was in a meeting where people were supposed to discuss their use cases and experiences with LLMs, and someone mentioned that they tried to do a financial analysis but the chatbot hallucinated all the results, including stock prices. then someone told them they can just explicitly ask it to not make anything up and it will be much better. like, these guys are flipping billions of dollars in investments, and real people are getting fired because their work now supposedly can be done by tools that you have to nicely ask to pretty please not make anything up... what the actual fuck.
-
@aud @tante @glyph well they do have metrics, it's just that they're generally ad-hoc and terrible metrics
and even when they aren't, Goodhart's Law ensures that relying on them turns the exercise into farce relatively soon.
arguably that kind of farce is the entire history of the false spring: "simply scale it up" worked surprisingly well, then worked surprisingly well again, and therefore we can extrapolate that it will work forever and [financial irresponsibility] and oops now it's not working anymore oh shit oh fuck uhhhh AGENTS, we're doing agents now! Yea, that's the ticket. (and so on)
The thing about agents, from what I understand in talking to vendors about using them, is that to use them correctly you have to build very detailed and specific playbooks for them to "follow".
In practice, it seems like most people just think you can Claude your way to success with vibes and vaguery.
They seem to think having an agent eliminates the hard part: defining your process in clear language. In truth, that part becomes more important, because an agent won't have the "common sense" not to delete and recreate your production database at 4:30 on a Friday before a three-day weekend. Or to just delete it.
This is not even including the identity and access boundaries you need. Like, we are having deep discussions about an agentic solution that would just read help desk tickets and make suggestions to the help desk personnel. We have to consider all the ways prompt injection could abuse its access. And when the agentic AI is telling people what to do, that's a prime target for social engineering. They want it to be able to reboot servers. That's a denial of service attack waiting to happen.
An outside vendor we've spent lots of money on is trying to sell us a multi-agent system that management is already in love with, and we have to educate them on the almost unfathomable risk it would create. How are they forgetting everything they've ever learned about risk modeling, threats, fraud, attack surfaces, least privilege, etc.? These are not stupid people, but they are acting like wide-eyed children just because it has the word "AI" attached to it. They should be more skeptical, not less.
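The least-privilege point above can be sketched in code. A minimal, hypothetical example (tool names invented, not any vendor's actual API): every tool call the agent requests goes through an explicit allowlist, with risky actions queued for a human instead of executed directly.

```python
# Hypothetical tool names, for illustration only.
READ_ONLY_TOOLS = {"read_ticket", "search_kb"}            # agent may call freely
HUMAN_APPROVAL_TOOLS = {"reboot_server", "close_ticket"}  # require human sign-off

def dispatch(tool_name: str, args: dict, approved: bool = False) -> tuple:
    """Route an agent's requested tool call through a least-privilege gate."""
    if tool_name in READ_ONLY_TOOLS:
        return ("execute", tool_name, args)
    if tool_name in HUMAN_APPROVAL_TOOLS and approved:
        return ("execute", tool_name, args)
    if tool_name in HUMAN_APPROVAL_TOOLS:
        return ("queued_for_review", tool_name, args)
    # Default-deny: anything not explicitly listed never runs.
    return ("denied", tool_name, args)
```

The design choice is default-deny: the agent's capabilities are whatever the allowlist grants, not whatever the underlying service account can do, so a prompt injection can at worst queue a request for a human to reject rather than reboot a server on its own.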
-
The thing about agents, from what I understand in talking to vendors about using them, is that to use them correctly you have to build very detailed and specific playbooks for them to "follow".
In practice, it seems like most people just think you can Claude your way to success with vibes and vaguery.
They seem to think having an agent eliminates the hard part: defining your process in clear language. In truth, that part becomes more important, because an agent won't have the "common sense" not to delete and recreate your production database at 4:30 on a Friday before a three-day weekend. Or to just delete it.
This is not even including the identity and access boundaries you need. Like, we are having deep discussions about an agentic solution that would just read help desk tickets and make suggestions to the help desk personnel. We have to consider all the ways prompt injection could abuse its access. And when the agentic AI is telling people what to do, that's a prime target for social engineering. They want it to be able to reboot servers. That's a denial of service attack waiting to happen.
An outside vendor we've spent lots of money on is trying to sell us a multi-agent system that management is already in love with, and we have to educate them on the almost unfathomable risk it would create. How are they forgetting everything they've ever learned about risk modeling, threats, fraud, attack surfaces, least privilege, etc.? These are not stupid people, but they are acting like wide-eyed children just because it has the word "AI" attached to it. They should be more skeptical, not less.
@jrdepriest @SnoopJ @aud @tante "what if we could get rid of everything that *wasn't* an insider threat. what if the entire inside was made of threats? would that fix it?"
-
@jrdepriest @SnoopJ @aud @tante "what if we could get rid of everything that *wasn't* an insider threat. what if the entire inside was made of threats? would that fix it?"
@jrdepriest @SnoopJ @aud @tante like I'm trying to make light of it with little jokes but that is LITERALLY WHAT IS GOING ON in an absolutely WILD number of places
-
@jrdepriest @SnoopJ @aud @tante "what if we could get rid of everything that *wasn't* an insider threat. what if the entire inside was made of threats? would that fix it?"
@glyph@mastodon.social @SnoopJ@hachyderm.io @jrdepriest@infosec.exchange @tante@tldr.nettime.org "when you think about it, if the call is coming from inside the house, you've really limited the amount of space you have to search to find the threat"
-
Your remarks make me think that employees could make a proposal to investors (and here I am making a pretty big assumption) that they can run the company better than management. They could plan to say this to investors after the first major disaster.
The assumption I'm making here is that the investors are all interested in the company doing well rather than in soaking money out of it by playing AI-driven stock movements.
@GhostOnTheHalfShell @tante You’re describing an Employee Stock Ownership Plan (ESOP) and when they don’t work out it’s ugly. All your eggs in one basket, etc.
-
@glyph@mastodon.social @SnoopJ@hachyderm.io @jrdepriest@infosec.exchange @tante@tldr.nettime.org "when you think about it, if the call is coming from inside the house, you've really limited the amount of space you have to search to find the threat"
@SnoopJ@hachyderm.io @jrdepriest@infosec.exchange @tante@tldr.nettime.org @glyph@mastodon.social "what does vertical integration mean to ME? To me, vertical integration is when all of your threats are insider threats. Now that's talking about shareholder value with corporate power."
-
@glyph the number of times I asked a CEO/CTO about their "AI" project, how they actually measure cost or what their measurable criteria for success are, and only got someone looking at me as if I were speaking in tongues is really scary.
Like: Isn't turning everything into metrics and measurements in order to make data-driven decisions exactly what management is supposed to do?
that's the story. they want us to believe it's not all ego and greed and vibe management. but ask for actual data and metrics that aren't just a pretty graph unrelated to reality and they will likely get defensive.
don't question the emperor's new clothes. offer to design their next outfit...
-
Which was really fucked up to see: These folks actually want to protect their organizations from burning a lot of resources on bullshit instead of fixing actual problems, which would help the workers _and_ the organization. And they have to actively fight management who got their brains ruined on LinkedIn.
@tante Oddly enough, I don't even think I find this truly unusual. Nonsense management is common. Some company cultures amplify it. I'll be curious about the post-mortem on this bubble...
-
Spent the day talking to works council members about "AI". And it's kinda wild hearing their stories from the wild: Management is 100% in "AI can do everything" fantasy land and makes huge plans for how to use "AI" to cut workers, while real projects that supposedly can do 50% of a specific task end up being able to do 8%. And they still go live. It's fucking bonkers. CEOs are really not okay.
@tante A friend of mine works at a company with ~30,000 employees, in a division made up primarily of subject matter experts, each with their own specialization. Their AI usage is tracked, and every week they sit through three hours of meetings pushing them to use more AI. Despite this, their entire division of a couple thousand employees is forbidden from using AI in their work due to intellectual property concerns. When they have called out the contradiction, they are told to use it to make their administrative tasks more efficient, even though those tasks consume a small fraction of their time.
-
Spent the day talking to works council members about "AI". And it's kinda wild hearing their stories from the wild: Management is 100% in "AI can do everything" fantasy land and makes huge plans for how to use "AI" to cut workers, while real projects that supposedly can do 50% of a specific task end up being able to do 8%. And they still go live. It's fucking bonkers. CEOs are really not okay.
@tante The entire tech sector is run by idiots who are convinced that they’re geniuses. As far as I’m concerned, there’s only one solution for this.
-
But it was super fun to lead them through a "this is how you can force reasonable evaluation on 'AI' projects which kills most of them" framework and see how they felt empowered and able to actually do their job again.
@tante My partner is trying (often in vain) to have this type of conversation with her supervisor and other colleagues who have succumbed to the siren song of AI.
I gather she hasn’t been too successful for the moment, but she isn’t alone. Perhaps she and some of her like-minded colleagues will be able to coordinate their efforts to concentrate on the project evaluation angle that you suggest.
At any rate, thank you for sharing your experience!
-
@otherdog @tante I guess I'll drop the link again for reference, in case you haven't seen it. I didn't do so above because I feel like I post this every single day now, to the point where the self-promotion feels shameful. But it remains painfully, almost nauseatingly relevant, so here you go: https://blog.glyph.im/2025/08/futzing-fraction.html
-
But it was super fun to lead them through a "this is how you can force reasonable evaluation on 'AI' projects which kills most of them" framework and see how they felt empowered and able to actually do their job again.
@tante
Thanks for your work on this matter and for sharing it! It's important.
Can you point me in a direction where I can learn more about the framework you're describing or the foundations beneath it?
I also try to keep my organization sane, and unfortunately I don't seem to be very successful in that regard.
Edit: Just read that you don't have your framework fully formalized/written down yet, and I definitely understand time is scarce. Thanks for what you already shared.

-
But it was super fun to lead them through a "this is how you can force reasonable evaluation on 'AI' projects which kills most of them" framework and see how they felt empowered and able to actually do their job again.
@tante is this framework available somewhere? I'd like to use it in my org