👀 … https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ … my colleague Denver Gingerich writes: newcomers' extensive reliance on LLM-backed generative AI is comparable to the Eternal September onslaught on USENET in 1993.
-
@kees @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @xgranade I think you are wildly underestimating the cognitive hazards. Like I hesitate to even say "wildly underestimating" because that phrase is not strong enough.
@wwahammy @kees IMO you're both right.
LLM-backed gen. AI is a dangerous tool w/ potential to not only atrophy the skillsets of experienced developers *but also* lead newcomers to *never develop those skills*.
Our charge is to create policies that encourage extremely disciplined use of these systems.
I support decriminalization of recreational substances. But that has to come with major funding for addiction support. IMO the analogy is apt.
@glitzersachen @josh @silverwizard @ossguy @xgranade
-
@kees @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade
> to overlap with the existing manipulation/critical-thinking hazards that capitalism
I think it's more than just the manipulation part. LLMs actively corrode skills of the users. Not simply through disuse. No, actually worse.
I hope you have heard about this possibility (whether you believe in it or not).
@glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade
> LLMs actively corrode skills of the users
Yup, very aware. It's a specific instance of what I still see as a larger critical thinking erosion happening all around us.
-
@kees @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade
I suspect (as some scientists do as well) that this is not a cultural, but a neurological phenomenon. So I think this is really on a totally different level.
The skill erosion I am talking about has nothing to do AT ALL with critical thinking.
Best case, it's just that when two synaptic circuits compete (use the translation tool vs. retrieve from memory), the one that was ready to activate but did not get chosen is actively weakened or deleted. My understanding is that this is how biological brains work and learn.
The worse alternative is that the output of LLMs has some as-yet insufficiently described hidden quality which poisons the neuronal networks that process it.
One hint in this direction is that LLMs which consume the output of other LLMs in training collapse. That's a clear indicator that on some level LLM output is observably different from human language production, even though we humans, on average, have a hard time telling the difference.
And this is what I am talking about: not a loss of cultural techniques or of learned skills through atrophy or through no longer being taught, but the poisoning of neuronal networks by input they cannot firewall, because they have not evolved to recognize it as a hazard.
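A toy sketch of that model-collapse point, for the curious. This is purely illustrative and assumes nothing about how real LLM training works: the "model" here is just a word-frequency table, and each generation is fit only on a finite sample of the previous generation's output, so words that go unsampled vanish for good and the tail of the distribution erodes.

    import random
    from collections import Counter

    # Toy illustration only: each "generation" is a model that knows nothing
    # beyond the word frequencies it estimated from a finite sample of the
    # previous generation's output. Words that happen not to be sampled get
    # probability zero and can never return, so diversity can only shrink.
    random.seed(42)

    vocab = [f"word{i}" for i in range(50)]
    # Generation 0: a long-tailed, roughly Zipf-like "human" distribution.
    weights = {w: 1.0 / (i + 1) for i, w in enumerate(vocab)}

    def write_corpus(dist, n):
        words = list(dist)
        return random.choices(words, weights=[dist[w] for w in words], k=n)

    for gen in range(1, 21):
        corpus = write_corpus(weights, 300)   # previous model "writes" a corpus
        weights = dict(Counter(corpus))       # next model is fit on that corpus
        print(f"generation {gen}: distinct words remaining = {len(weights)}")

The distinct-word count can only go down here, which is just a cartoon of the effect, but it shows how training on a predecessor's output loses the tail that made the original data rich.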
-
@bkuhn @wwahammy @kees @josh @silverwizard @ossguy @xgranade
Let me point you to my reply here => https://hachyderm.io/@glitzersachen/116421481982246037.
I really think the issue at the core *might* not be losing skills by neglecting to exercise them, but rather poisoning of neural networks. Brainwashing them into (skill) oblivion.
The comparison to hard drugs would be apt, if this is true.
And our employers want us to ruin our skills and our brains. They obviously don't believe in a common future with their knowledge workers anymore...
-
@wwahammy @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @xgranade
Right; it is an extremely focused risk (differing from the larger varieties and sources of critical-thinking erosion). And every piece of research I've seen on "how to safely use LLMs in education" confirms this with bright flashing lights: there is no safe way. LLMs appear to have a universally negative impact in education.
-
@bkuhn @kees @glitzersachen @josh @silverwizard @ossguy @xgranade
This is not a remotely accurate analogy. The level of rage in this country over AI is uncontrollable and it's accelerating. Two people tried to kill Sam Altman in the last week. An Indiana planning official's house was shot at after they approved a new data center.
In the political realm, the shift is unimaginably swift. Ex: 6 months ago, no Democratic candidate for WI governor had a policy on data centers, because building unions wanted them. Now every one of them is fighting over how strict their ban on data centers is.
The best analogy I can think of is the opioid crisis. When people were ready to kill the Sacklers and everyone at Purdue Pharma, you couldn't come in and say anything that made people think you were tolerant of the damage. You couldn't even argue "we can punish these people but we have to protect access to opioids". Everyone KNOWS there are uses, but you can't build a policy around that, because the public doesn't care. At all.
The only time you could have this discussion was years ago, or years in the future after the public has taken its pound of flesh. Right now, it's an immensely dangerous idea for SFC.
-
Talking with them is good. Helping to educate them is good. Making it sound as if what they are doing is okay is *not*.
There is a big difference between offering an olive branch to people who *might* be productive contributors in the *future*, and telling them that what they're doing *now* is okay.
The best AI policy remains "do not contribute any LLM-written content, ever". You have published a post that makes it easier for people who oppose such policies to cite your "olive branch" when arguing against it, and it is not obvious from your post that you do not want that to happen.
I don't want to see people *abused* for using LLMs. I do want them to understand that what they're doing is not okay and not welcome and not a positive contribution.
@josh @silverwizard @ossguy @bkuhn @karen @wwahammy
As far as I am concerned, people should be "abused" for shilling AI from a position where they really don't have sufficient insight. Like middle management trying to push AI on reluctant software engineers with all the tricks in the book (for example, tying performance-review results to AI use). This behavior destroys trust and workplace culture. What do they think? That the engineers don't understand their own way of working? The hubris of management: "I'll tell you how you can work better. I know better than you how you can work better."
And this behavior needs to be called out.
-
@wwahammy @silverwizard @firefly_lightning @cwebber @ossguy yeah, "great question! come over to crime scene 2 for an answer perhaps!" has never been a good look.
It was presented as human-written text. The human who signs their name to it should be able to answer text-based questions about it in written form.
@davidgerard @wwahammy @silverwizard @firefly_lightning @cwebber Yes, which is why it's important to allow people to identify when they have used LLM/AI assistants to help. New contributors will see this is the norm, and then it will be easier to help them, because we'll know a bit about where any potential knowledge gaps might be coming from.
If we "ban" LLM/AI-assisted contributions, people will use them anyway but hide their use, which is a trickier problem to solve.
-
People shouldn't be *abused*, ever. If people are *shilling* AI and trying to force it on others, they might deserve some amount of shame and disapprobation. But nobody deserves abuse.
-
(2/5) … In https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ ,
Denver's key points are: we *have* to (a) be open to *listening* to people who want to contribute to #FOSS with #LLM-backed generative #AI systems, & (b) work collaboratively on a *plan* for how we can solve the current crisis. Nothing good ever got done politically when both sides become more entrenched, refuse to even concede the other side has some valid points, & each say the other is the Enemy. …
@bkuhn For those who are acting in good faith, and willing to contribute in healthy ways - yes, it's absolutely worthwhile to talk to them, and try to get them to contribute in good ways. If you can, have time, are not drowning in so much slop that you can't tell who means well and who just needs to be blocked/banned, etc. - integrating people into the community is a lot of work, and a lot of people maintaining and making free software are already doing a lot of work for free as it is. (1/?)
-
@glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade
Right, yeah, this is why I've cautioned people about *how* they use LLMs. You've distilled it more clearly, and it lines up with my own intuition about how human memory systems work: retrieval is effectively erasure, so "remembering" requires retrieval and re-storage. Research into treating PTSD (IIRC?) and such found that blocking storage (with drugs or EM) and then triggering recall would wipe memories. You're describing a potentially purely experiential way to do this, which is terrifying.
I feel like using an LLM can lead to a Dunning-Kruger-like effect, in that you think you know what it did, but you don't. And this belief satisfies the need/instinct to learn/know what happened without having actually done so. (Reminds me of making a TODO list, and then the dopamine hit from that kills the need to actually *do* the list.)
-
@bkuhn That does not mean that LLM-generated code, assets, or outputs can be allowed in free-software projects. Which is an important distinction.
If someone wants to contribute, that's great - point them to resources on how to do so, how to make submissions that can be accepted. If they won't contribute without using claude or whatever, then their contributions must be refused.
The ethical, environmental, public health, and freedom/human rights issues of LLMs as they exist are too severe (2/?)
-
@bkuhn to be acceptable in free software. I would go so far as to say the free software definition should be amended, to exclude any component generated by an LLM or similar generative program that is not:
- Purely deterministic OR
- Completely free, including all training data, weights, source code, training processes, etc. such that a user (with sufficient resources) could recreate the process from the ground up, in the same way that a user can re-compile GCC, & uses only (3/?)
-
@bkuhn Ethically-acquired (so, no DoSing an artist's website for it, no ignoring a robots.txt to scrape it, etc.) training data.
This doesn't solve the *other* ethical problems, but the nature of these generators is that without freedom in the full chain from data to output as a minimum, they should be excluded from the free software definition - in the same way inserting a binary blob into the Linux kernel makes it at least partially non-free. The LLM is a non-free black box without those (4/?)
-
@bkuhn parts being as free as the source code, and including their outputs should be treated like including a binary blob - since there is no way to investigate the process behind how the output ended up there or was created. It can't even be meaningfully reverse-engineered in most cases.
I'd also add, for any project, refusal of any generated content that carries the climate, freedom, rights, labour rights, etc. concerns as well - but at a minimum, the outputs of an LLM cannot be considered free. (5/5)
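A minimal sketch of the rule being proposed in parts 3-5, written as a checklist. Every name here is invented for illustration; this is not an existing definition or tool, just the stated criteria made explicit: a generated component passes only if the generator is purely deterministic, or its whole chain is free and ethically sourced.

    from dataclasses import dataclass

    # Hypothetical encoding of the proposed criteria (names invented here).
    @dataclass
    class Generator:
        purely_deterministic: bool
        training_data_free: bool
        weights_free: bool
        source_and_training_process_free: bool
        training_data_ethically_acquired: bool

    def output_can_be_free(g: Generator) -> bool:
        fully_free_chain = (
            g.training_data_free
            and g.weights_free
            and g.source_and_training_process_free
            and g.training_data_ethically_acquired
        )
        return g.purely_deterministic or fully_free_chain

    # A typical hosted LLM today would fail both branches:
    print(output_can_be_free(Generator(False, False, False, False, False)))
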
-
@bkuhn The *people* coming in, if they mean well and will contribute in ethical ways, are fine, and worth welcoming in.
But the LLMs themselves - the use of the tool - as it exists undermines the ethical core of the free software movement, and carries too many other ethical problems to be acceptable.
(6/5)
-
@bkuhn This leaves room for an ethical, actually free, version of the tech, should it appear at some point, which is a compromise; my instinct is that there can be no ethical version, but I could be wrong.
As-is though, the LLMs & generative systems in use are a black box - even 'open source' ones: if they do not also provide full access to the training data & methodology, including them in free software is no better than including proprietary code. The definitions & licenses need to reflect this. (7/5)
-
@glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade
I lump my experiences of software engineering use of LLMs into 3 modes:
1) "work together", I am watching everything it is doing, reviewing every step, and contributing to the result in tandem. This doesn't feel to me like anything is being eroded on my end. But I'm also a deep sceptic of its output.
2) "do the thing I know how to do for me", this is super dangerous, as I think I'm solving problems I am familiar with, but I didn't follow the results closely and I'm left with deep erosion of my comprehension of both problem and solution.
3) "vibe coding", I have no idea what it is doing with a thing I don't know about and I know I have no idea what it is doing. This doesn't seem to erode anything. It does create a new problem for me, though, if the LLM can't solve some problem because also neither can I.
I've felt #2 a few times, and I had the alarm bells in place to shift myself back to #1, which required doing full review and looking back through the reasoning and checking the work. The risk of being drawn into #2 is high given the sychophancy of the models, but I think my suspicion of it has helped avoid this a bit.
(And perhaps I am more deluded than I think.)#3 I have done for educational/amusement purposes, but it's an uncommon mode for me because what's the point of creating a thing I don't understand and can't fix?
("I can quit any time!")