u programmed this with claude?
-
u programmed this with claude? the ai platform for bombing schools?
@jacqueline Claude's actually the one who refused to do that, unless I'm missing something? ChatGPT is the school-bomber.
-
@jacqueline I don't think they used Claude for that, more like ChatGPT and not checking the phone book.
-
u programmed this with claude? the ai platform for bombing schools?
@jacqueline imho this is the same as saying "You are using the same pen they signed the Nuremberg laws with". People are accountable for these atrocities; AIs are just another (problematic) tool enabling bad people to do bad things more efficiently. Don't forget who is truly responsible here.
-
u programmed this with claude? the ai platform for bombing schools?
@jacqueline Not advocating for any of them, but Anthropic famously refused to work with the Department of War. I think you were thinking of OpenAI
-
u programmed this with claude? the ai platform for bombing schools?
@jacqueline Some news reports mentioned it was given old data about some military HQ. The old saying about software applies to AI too: potatoes in, potatoes out... In this case it was, tragically, potatoes in, dead children out... And that's why these systems should not be used for war, nor for anything else critical.
-
@jacqueline Some news reports mentioned it was given old data about some military HQ. The old saying about software applies to AI too: potatoes in, potatoes out... In this case it was, tragically, potatoes in, dead children out... And that's why these systems should not be used for war, nor for anything else critical.
@jacqueline If a human had reviewed this, they would probably have checked the intel, noticed that the information on the target "was not very recent", and double-checked it against other sources.
-
@jacqueline Not advocating for any of them, but Anthropic famously refused to work with the Department of War. I think you were thinking of OpenAI
@vixalientoots Anthropic were more than happy to work with the Department of War Crimes, they just didn't think that the models were ready to be used in fully autonomous weapons... *yet*.
-
@jacqueline imho this is the same as saying "You are using the same pen they signed the Nuremberg laws with". People are accountable for these atrocities; AIs are just another (problematic) tool enabling bad people to do bad things more efficiently. Don't forget who is truly responsible here.
@vyllenjamnin @jacqueline they really aren't "just tools". This analogy is just wrong and extremely misleading.
They are services provided by organizations that have politics embedded into them.
A pen doesn't influence WHAT you're writing. An LLM's training process, which is controlled and managed by people with certain politics, very much influences what its output will be.
-
u programmed this with claude? the ai platform for bombing schools?
@jacqueline it is not even Claude. Palantir does the job. Now it will probably use OpenAI’s tools
-
@jacqueline it is not even Claude. Palantir does the job. Now it will probably use OpenAI’s tools
@jacqueline What baffles me is how a panopticon that combines cell tower information with all the info our mobile phones give out (BLE + wifi triangulation and app data), and knows exactly where we are with centimetre accuracy, can be so wrong…
-
@vyllenjamnin @jacqueline they really aren't "just tools". This analogy is just wrong and extremely misleading.
They are services provided by organizations that have politics embedded into them.
A pen doesn't influence WHAT you're writing. An LLM's training process, which is controlled and managed by people with certain politics, very much influences what its output will be.
RE: https://mastodon.social/@glyph/116220202738664759
@aesthr @vyllenjamnin @jacqueline Exactly right - see also this thread by @glyph

-
@aparrish you're really missing the point here
-
u programmed this with claude? the ai platform for bombing schools?
@jacqueline For anyone questioning if Claude was used for selecting targets during the US attack on Iran, including bombing a school, here are some sources showing that Claude was used (and is still in use):
https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-s/
https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military
-
@vyllenjamnin @jacqueline they really aren't "just tools". This analogy is just wrong and extremely misleading.
They are services provided by organizations that have politics embedded into them.
A pen doesn't influence WHAT you're writing. An LLM's training process, which is controlled and managed by people with certain politics, very much influences what its output will be.
@aesthr @jacqueline you are right! It is not just a tool; it is strongly influenced by corporate interests and built largely on stolen work. Maybe I made my point badly. We should not forget that people are still responsible! Let's not shift focus away from whoever eventually signed off on this decision.
-
@aparrish my point is that people can use any pen to write any words they want.
No pen (not even that hypothetical one formerly used by Nazis) will prevent you from writing a certain sequence of words, or from writing about a certain topic. No pen is going to stop putting ink on the paper when your words conflict with some corporate content guideline, or if you write something illegal. No pen is going to write words that you didn't decide to write.
Generative "AI" does all of those things.
-
u programmed this with claude? the ai platform for bombing schools?
@jacqueline @afewbugs thanks im gonna start saying this.
-
u programmed this with claude? the ai platform for bombing schools?
boost with CN: LLMs, war crimes
-
@aparrish you’re still missing the point here about tools vs services, about means of production. and I get the feeling you’re just trying to be contrarian for its own sake
-
@aparrish i never said that tools in general don’t have politics embedded in them. Yet you went on an unsolicited lecture about it instead of engaging with what I was originally talking about.
It’s arrogant and condescending. Now leave me the fuck alone
-
Why assume the human giving the order or the intelligence agency running the AI to select targets *didn't* want to bomb a school?
Obviously it's still a war crime even if they say it was based on faulty or outdated intel. Anyone can *say* that. Equally obviously, a calculation has been made about whether you can get away with war crimes these days.
I hate that it's so, but the simplest and most likely explanation for atrocity is always intent.