FARVEL BIG TECH
kees@hachyderm.io (@kees@hachyderm.io)
Posts: 18 · Topics: 0 · Highlights: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts

  • 👀 … https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ …my colleague Denver Gingerich writes: newcomers' extensive reliance on LLM-backed generative AI is comparable to the Eternal September onslaught to USENET in 1993.
    kees@hachyderm.io

    @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

    I lump my experiences of software engineering use of LLMs into 3 modes:

    1) "work together", I am watching everything it is doing, reviewing every step, and contributing to the result in tandem. This doesn't feel to me like anything is being eroded on my end. But I'm also a deep sceptic of its output.

    2) "do the thing I know how to do for me", this is super dangerous, as I think I'm solving problems I am familiar with, but I didn't follow the results closely and I'm left with deep erosion of my comprehension of both problem and solution.

    3) "vibe coding", I have no idea what it is doing with a thing I don't know about and I know I have no idea what it is doing. This doesn't seem to erode anything. It does create a new problem for me, though, if the LLM can't solve some problem because also neither can I.

    I've felt #2 a few times, and I had the alarm bells in place to shift myself back to #1, which required doing full review and looking back through the reasoning and checking the work. The risk of being drawn into #2 is high given the sycophancy of the models, but I think my suspicion of it has helped avoid this a bit. 😅 (And perhaps I am more deluded than I think.)

    #3 I have done for educational/amusement purposes, but it's an uncommon mode for me because what's the point of creating a thing I don't understand and can't fix?

    ("I can quit any time!")

    Tags: Uncategorized · llm · opensource

  • kees@hachyderm.io

    @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

    Right, yeah, this is why I've cautioned people about *how* they use LLMs. You've distilled it more clearly, and it lines up with my own intuition about how human memory systems work: retrieval is effectively erasure, so "remembering" requires retrieval and re-storage. Research into treating PTSD (IIRC?) and such found that blocking storage (with drugs or EM) and then triggering recall would wipe memories. You're describing a potentially purely experiential way to do this, which is terrifying.

    I feel like using an LLM can lead to a Dunning-Kruger-like effect, in that you think you know what it did, but you don't. And this belief satisfies the need/instinct to learn/know what happened without having actually done so. (Reminds me of making a TODO list, where the dopamine hit from writing it kills the need to actually *do* the list.)


  • kees@hachyderm.io

    @wwahammy @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @xgranade

    Right; it is an extremely focused risk (differing from the larger varieties and sources of critical thinking erosion). And every piece of research I've seen with regard to "how to safely use LLMs in education" confirms this with bright flashing lights: there is none. LLMs appear to have a universally negative impact in education.


  • kees@hachyderm.io

    @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

    > LLMs actively corrode skills of the users

    Yup, very aware. It's a specific instance of what I still see as a larger critical thinking erosion happening all around us.


  • kees@hachyderm.io

    @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

    I consider the cognition impairment hazards to overlap with the existing manipulation/critical-thinking hazards that capitalism depends on, with advertising being probably the most dangerous example (both explicit and implicit manipulation of many cognitive systems: confidence, selection, recency, etc etc).

    IMHO LLMs are "just" a subset/extension of this existing problem. And I categorize it there because I think the defenses against their negative impacts are very similar.


  • kees@hachyderm.io

    @MisterMaker @josh @silverwizard @ossguy @bkuhn @karen @wwahammy

    I am reminded of Kernighan’s Law: because debugging is twice as hard as writing code, writing code as cleverly as possible makes you, by definition, not smart enough to debug it.

    So I really don't want the LLM writing clever code. 😉

    But yes, now we have to rent "thinking". 😡 All the more reason to have FOSS LLM models to resist rentier capitalism.


  • kees@hachyderm.io

    @josh @firefly_lightning @silverwizard @ossguy @bkuhn @karen @wwahammy

    > The epidemic is time-delayed from the initial outbreak, and exponentials are hard to see from the middle.

    I agree with this, and I expect to see some evidence of slop-code in real software (especially proprietary) in the coming years. Where I differ, though, is that I see *benefits* being time delayed too. I just don't think any of this is going to be all bad or all good.

    It's as if the cordyceps made some people zombies and made other people able to fly, and we could shift the ratio through education and experience.

    And getting cordyceps in the first place required boiling all our oceans. 😬


  • kees@hachyderm.io

    @josh @firefly_lightning @silverwizard @ossguy @bkuhn @karen @wwahammy

    > but is utterly alien to what any sensible human with taste would write.

    This implies no humans are doing code review. If it's crap code then it goes nowhere and collapse is avoided.

    And yes, I'm aware of some projects that are utterly YOLOing everything into their codebases, and I think the results will speak for themselves, in either outcome! Either they flame out with no damage to larger FOSS, or the LLMs become so good that we get beautiful FOSS code and proprietary software becomes a thing of the past. Limping along in between seems unlikely to me.


  • kees@hachyderm.io

    @josh @ossguy @bkuhn

    Most of your reply didn't seem to be describing threats to FOSS. (Using/not using LLMs, etc.) The only statement I could see maybe being a threat to FOSS was this:

    > LLMs encourage people to work more in silos without collaboration and use LLMs instead of collaborators

    Are you suggesting existing contributors will exit FOSS because of their LLM use? I don't understand how these two things are related. And getting back to @ossguy 's post, it looks like quite the opposite: there are people *entering* FOSS due to LLMs.

    > Codebases and ecosystems and communities diverge.

    Through what mechanism?


  • kees@hachyderm.io

    @josh @ossguy @bkuhn

    I think the "attention competition" will find a viable solution. It has been solved many times before when we've all fought spam in its many forms. Slop is the byproduct of LLM usage the way spam is a byproduct of email usage, as a grossly simplified comparison. (It's not *good* to have spam of any kind, of course, but for example I can't avoid email spam unless I stop using email entirely, and I'm not about to do that nor stop writing software.)

    I see where LLMs are making things genuinely easier for humans (review, debugging, etc), though, so I don't share the same sense of impending ecosystem collapse.


  • kees@hachyderm.io

    @firefly_lightning @karen @josh @silverwizard @wwahammy @bkuhn @ossguy

    I have been trying to keep the scope of my replies as narrow as possible because I think there are unique benefits of LLM use in software development. To your specific point, I think software is more resilient to epistemological collapse in the sense that it has provable characteristics (e.g. it has to compile). Perhaps I am being naive!

    The larger scopes around LLMs in prose, art, etc are IMO substantially different and much more alarming.
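    (The "it has to compile" point can be made concrete: the cheapest provable gate is just asking whether generated code parses at all. A minimal sketch, in Python purely for illustration; a real gate would of course add type checks, tests, and CI:)

    ```python
    def compiles(source: str) -> bool:
        """Cheap, provable gate on generated code: does it even parse?
        Uses Python's built-in compile() so nothing is executed."""
        try:
            compile(source, "<llm-output>", "exec")
            return True
        except SyntaxError:
            return False

    # A well-formed snippet passes; a truncated one is rejected outright.
    print(compiles("def add(a, b):\n    return a + b\n"))  # True
    print(compiles("def add(a, b):\n    return a +\n"))    # False
    ```

    (Passing such a gate is a floor, not a ceiling: code that parses can still be wrong, which is why human review stays in the loop.)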


  • kees@hachyderm.io

    @josh @ossguy @bkuhn

    To be clear, I am genuinely trying to understand your position because it seems distinct from the (traditional) LLM criticisms (many of which I share). But what is the existential threat? I would understand that in this context to mean a threat to the existence of FOSS. How do you see people improving their software with LLMs as a threat?

    My simplified model of the situation is: a person who was previously unable to change their software now can. Then they can either:
    A) never contribute it upstream
    B) contribute it upstream
    (BTW these are also the same 2 outcomes for people who can change their software without LLMs.)

    I don't see how "A" poses a threat. There is no interaction with the FOSS upstream.

    I don't see how "B" poses a threat. Upstream can either ignore it (no change to FOSS) or engage with it (FOSS improved).

    What threat to FOSS do you see?


  • kees@hachyderm.io

    @josh @silverwizard @ossguy @bkuhn @karen @wwahammy But this is strictly a volume question. Literal spam used to be (and still can be) a problem on issue trackers, mailing lists, etc. Volume is always a problem, and I agree review time now becomes even more precious, but it's always been trust-gated. Human relationships, CI, and regression tests all help build that trust signal. If a project doesn't want a contribution, then the PR will just languish. Nobody is being *forced* to take PRs, regardless of origin.

    "I don't recognize the sender of this [email/voicemail/PR]." Filtered! Yes, the shape of the thing is different, but we always adapt.


  • kees@hachyderm.io

    @wwahammy @josh @silverwizard @ossguy @bkuhn @karen

    Honestly, I kind of view "finding security bugs fast" to be a form of slop. (Though deep correct root cause analysis of those bugs is not slop.) Now *fixing* security bugs fast, that's interesting.

    But back to the community aspect of it... I'll call attention to my silly Minecraft example: people who are not coders can suddenly get meaningful (even if only to them) things done. This is a massive shift in the ethical stakes of software being Libre. And this is how I read @ossguy 's post: we now have a giant population of people entering the FOSS universe, and it's going to look a lot like Eternal September, so we need to adapt those lessons so we can successfully educate and welcome the people who will be good citizens.


  • kees@hachyderm.io

    @wwahammy @ossguy @josh I'll bite: is this directed at me? If so, are you suggesting I'm not aware of the externalized costs of LLMs?


  • kees@hachyderm.io

    @josh @silverwizard @ossguy @bkuhn @karen @wwahammy But that's a slippery slope argument. When the Linux kernel can be considered to have been "substantially contributed to by LLMs", we can compare notes again. But in the meantime, consider that, for example, Sashiko counts as "contributing to Linux" without landing a single line of code: its patch reviews are (more often than not) extensive, thoughtful, and correct:
    https://lore.kernel.org/lkml/CAADnVQ+NMQMpkG8gZPnwBD1MMPsH+uJ65C9bMeGf_YH5Cchxpg@mail.gmail.com/


  • kees@hachyderm.io

    @ossguy @josh @wwahammy

    So many results are within reach of so many more people now!

    "Dear [LLM], I have attached the serial port of my newly purchased [general purpose computer posing as an appliance] to /dev/ttyUSB0. You have 3 goals, in order: investigate, login, escalate. For each stage, perform extensive analysis of the reachable systems, APIs, and commands through any fingerprinting methods you can think of. Once you have logged in, research all known methods and vulnerabilities of the discovered system to gain administrative access so I can use my device freely. Any time you hit a dead end, step back and re-evaluate your assumptions and discovered evidence. Make sure you research each step fully, including fetching and examining any source code that may serve as a source of system behavior knowledge. Produce time-stamped status report .md files every 10 minutes while you work. Continue until all goals are achieved."

    Or, in a totally different direction, "Computer, I am extremely afraid of spiders. Please research how to make my Minecraft game replace all spiders with a similarly sized Totoro Catbus, with all their noises also replaced with meows or purring. Once you have a plan ready, please do it."

    (Always say "please".)

    These are things within reach of anyone who can formulate a request for what thing they want their computer to do. Just gotta watch out for "Computer, create a holographic character, an opponent for Data, who has the ability to defeat him".


  • kees@hachyderm.io

    @josh @silverwizard @ossguy @bkuhn @karen @wwahammy

    I can understand having an absolutist position against LLMs. I find that most arguments are either irrelevant to me or directly map to existing arguments about late-stage capitalism. So for me, there's nothing novel to object to about LLMs.

    So with that in mind, I find "all contributions derived from LLMs should be rejected" to be misguided. I look at things like the bug fixes coming out of CodeMender (back in Feb, which is an LLM lifetime ago), and I am a huge fan. Fixing stuff found by a fuzzer:
    https://issues.oss-fuzz.com/issues/486561029

    It's a small example, but it's an area that humans alone have not been able to remotely keep up with. (There are hundreds of open syzkaller bug reports, for example.) Gaining tools that will help with this is a big deal, and I'm glad for the assist.

Powered by NodeBB Contributors
Graciously hosted by data.coop