I was disappointed to read Cory Doctorow's post where he got weirdly defensive about his LLM use and started arguing with an imaginary foe.
@tante has a very thoughtful reply here:
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
A few further comments, 🧵
>>It was particularly disappointing to see Doctorow misconstrue (and thus, if he is believed, undermine) the work that many of us are doing to shine a light on the ways in which the ideology of "AI" and the specific ways in which LLMs and other "AI" products are created do real harm.
>>In this context, I feel like reminding people (again) that the stochastic parrots paper was not primarily a response to synthetic text extruding machines (not at all popular in late 2020), but an exploration of the range of harms that had already been documented in the pursuit of LM scale.
https://dl.acm.org/doi/10.1145/3442188.3445922
>>I also want to point out (again) the ways in which lumping together all uses of LMs (like the lumping of technologies into "AI") obscures the issues at hand.
Language modeling is a useful component of many technologies that can be built without extractive, exploitative means. Take the automatic transcription built by and for the Māori people -- there's a te reo Māori language model that's part of that.
>>And the transformer architecture represented an important step forward in language modeling, one that brought improvements to things like spell checking (Doctorow's use case).
>>What we argued in Stochastic Parrots, however, was that you can get those benefits of the transformer architecture without amassing datasets too large to collect with care (meaning consentfully, intentionally, and with the ability to document what's in the data).
>>And you can build and use language models without turning them into the synthetic text extruding machines that are despoiling our information ecosystem.
And even if those are easily accessible, because OpenAI et al want to burn through cash with their demos, we can still refute and refuse the narrative that synthetic text is somehow a panacea to be used across social services (medicine, education) and in science, etc.
>>Doctorow could have gone into these details; could have said something about how the particular LLM he chose was built (whose data, trained how, how much data, what kind of further data work in RLHF); could have drawn a distinction about use cases.
>>But instead he wrote a defensive screed, seemingly imagining that anyone who learned of his LLM use would ascribe to him all of the ills of everyone's LLM production and use.
A missed opportunity, to be sure.
>>@tante small typos if useful to know:
"And that stand lead him into the problematic train of thought" (led)
"Of just reap the fruits..." (or)
And thank you for this piece!!
>>@emilymbender Thank you for this. I read the piece in Smashing Frames yesterday – very thoughtful, as is your response.
>>@emilymbender @tante
FYI, there was also a follow-up that I found equally worth reading:
https://tante.cc/2026/02/20/on-alliances/
>>@emilymbender @tante Enshittification Man himself getting enshittified? Is nothing in this world sacred?
>>@emilymbender @tante Thank you so much. Yours is the ethical reading of LLMs that is needed. (1) Moral beliefs lead to choices that are not black-and-white. (2) Tools are not immoral because of their creators. (3) The decision to use knowledge immorally obtained should weigh heavily on the user. (4) Tools whose use produces immoral outcomes should also weigh heavily on the user.
It's unfortunate that Doctorow went all-in on logical fallacies and presumed absolutes in order to defend his use.
>>I've found some solace in combining Kant's universal categorical imperative with ethical hedonism.