FARVEL BIG TECH

👀 … https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ … my colleague Denver Gingerich writes: newcomers' extensive reliance on LLM-backed generative AI is comparable to the Eternal September onslaught on USENET in 1993.

llmopensource
310 Posts 57 Posters 0 Views
This thread has been deleted. Only users with topic management privileges can view it.
  • josh@social.joshtriplett.org
    Talking with them is good. Helping to educate them is good. Making it sound as if what they are doing is okay is *not*.

    There is a big difference between offering an olive branch to people who *might* be productive contributors in the *future*, and telling them that what they're doing *now* is okay.

    The best AI policy remains "do not contribute any LLM-written content, ever". You have published a post that makes it easier for people who oppose such policies to cite your "olive branch" when arguing against it, and it is not obvious from your post that you do not want that to happen.

    I don't want to see people *abused* for using LLMs. I do want them to understand that what they're doing is not okay and not welcome and not a positive contribution.
    kees@hachyderm.io wrote (last edited)
    #60

    @josh @silverwizard @ossguy @bkuhn @karen @wwahammy

    I can understand having an absolutist position against LLMs. I find that most arguments are either irrelevant to me or directly map to existing arguments about late-stage capitalism. So for me, there's nothing novel to object to about LLMs.

    So with that in mind, I find "all contributions derived from LLMs should be rejected" to be misguided. I look at things like the bug fixes coming out of CodeMender (back in Feb, which is an LLM lifetime ago), and I am a huge fan. Fixing stuff found by a fuzzer:
    https://issues.oss-fuzz.com/issues/486561029

    It's a small example, but it's an area that humans alone have not been able to remotely keep up with. (There are hundreds of open syzkaller bug reports, for example.) Gaining tools that will help with this is a big deal, and I'm glad for the assist.
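As a sketch only (everything below is invented for illustration; it is not the CodeMender or oss-fuzz code), the class of bug a fuzzer surfaces, and the loop that surfaces it, can be shown in a toy:

```python
import random


def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser: the first byte declares a payload length.

    The classic fuzzer-found bug is trusting that declared length;
    the fix here is to check what is actually available and reject
    truncated input instead of silently returning short data.
    """
    if not data:
        return b""
    n = data[0]
    payload = data[1 : 1 + n]
    if len(payload) < n:
        raise ValueError("truncated payload")
    return payload


def fuzz(iterations: int = 1000) -> int:
    """Throw random blobs at the parser; count how many it rejects."""
    random.seed(0)  # deterministic for reproducibility
    rejected = 0
    for _ in range(iterations):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(8)))
        try:
            parse_length_prefixed(blob)
        except ValueError:
            rejected += 1
    return rejected
```

Real pipelines differ mainly in scale: coverage-guided mutation, crash triage, and, in the CodeMender case described above, an LLM proposing the patch for a human to review.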


      josh@social.joshtriplett.org wrote (last edited)
      #61
      One of *many* arguments against: codebases substantially contributed to by LLMs will develop a tolerance for complexity that is not conducive to being maintained by anything *other* than an LLM.
      • js@ap.nil.im

        @bkuhn @wwahammy @silverwizard @cwebber Way to ignore the entire copyright point…

        Unfortunately, this is what always has been done by LLM proponents: Whenever the copyright question comes up, it just gets ignored.

        I guess that is the same way the AI techbros operate: “Let’s just ignore copyright for now, get AI-tainted code into everything, and hope AI will have tainted so much code that judges don’t want to open that can of worms!” Until they finally do, because some big companies with enough lawyer money start to fight it all the way.

        With the current rate of AI tainting everything, maybe it’s time to look for hobbies and jobs that don’t involve computers…

        707kat@mastodon.art wrote (last edited)
        #62

        @js @silverwizard @bkuhn @cwebber Anthropic's undercover mode, as an example.


          js@ap.nil.im wrote (last edited)
          #63

          @707Kat @silverwizard @bkuhn @cwebber Right. That is probably the clearest example that the goal is tainting open source.

            bkuhn@fedi.copyleft.org wrote (last edited)
            #64

            @josh

            Pure strawman: "LLM-backed generative AI output should be accepted upstream without curation." No one here suggested that.

            FWIW, I'd like to teach developers who clearly won't stop using these tools to either (a) keep that slop to yourself, or (b) learn to take that raw material & make an *actually useful* patch out of it.

            This is what @ossguy's blog post says we should *start* discussing.

            I think folks who are (legit) exasperated are reading in words that aren't there.

            Cc: @kees

            • bkuhn@fedi.copyleft.org wrote (last edited)
              #65

              @wwahammy

              Where did @ossguy argue that upstream should accept LLM-backed AI-generated code of “substantial size”? I don't see that in his blog post.

              Cc: @josh @silverwizard @ossguy @karen @kees


                silverwizard@convenient.email wrote (last edited)
                #66
                @bkuhn @karen @josh @wwahammy @kees @ossguy I think the amount of confusion the post has caused might warrant a redraft. I'm trying hard to understand the point, but I can't. I've asked a few times: why was the post made? It reads like it's advancing a narrative, yet all proposed readings have been rejected.
                • firefly_lightning@convenient.email
                  @bkuhn @silverwizard @wwahammy @cwebber I am not sure if I'm a known enough entity to post this here, but I think it's worth pointing out that if you allow it into the community, who within the community are you pushing out? It would be unrealistic to think that accepting LLMs into the community won't actively push a portion of the community away. The other thing worth considering is the reasons why it would push people out, because I'm concerned that the fear of not being welcoming is overcoming the desire to have a safe community. Idk if that resonates, so please feel free to yell me outta here if I'm overstepping.....
                  bkuhn@fedi.copyleft.org wrote (last edited)
                  #67

                  @firefly_lightning
                  You're not overstepping, and these are very good perspectives. I hope you'll come to the real-time discussion sessions and talk about this.
                  I am concerned that maintainers are already overwhelmed with #AI #slop right now, but yelling at the problem has not helped.

                  We're close to an arms race here & I'd rather be the voice of reason who finds a compromise that advances FOSS & doesn't complicate maintainers' jobs, rather than take a side in the arms race.
                  Cc: @josh @kees @ossguy

                  • ossguy@fedi.copyleft.org

                    @josh @wwahammy The point I was trying to make is that people are making software with LLMs who had never made software before, they aren't familiar with how FOSS works, and we should teach them how so they can collaborate (when it makes sense) instead of being an island. When people see the huge benefits of building on FOSS, when they can make meaningful changes to their router, TV, or otherwise by themselves (and collaborate to share their changes with others), then FOSS wins. (1/2)

                    kees@hachyderm.io wrote (last edited)
                    #68

                    @ossguy @josh @wwahammy

                    So many results are within reach of so many more people now!

                    "Dear [LLM], I have attached the serial port of my newly purchased [general purpose computer posing as an appliance] to /dev/ttyUSB0. You have 3 goals, in order: investigate, login, escalate. For each stage, perform extensive analysis of the reachable systems, APIs, and commands through any fingerprinting methods you can think of. Once you have logged in, research all known methods and vulnerabilities of the discovered system to gain administrative access so I can use my device freely. Any time you hit a dead end, step back and re-evaluate your assumptions and discovered evidence. Make sure you research each step fully, including fetching and examining any source code that may serve as a source of system behavior knowledge. Produce time-stamped status report .md files every 10 minutes while you work. Continue until all goals are achieved."

                    Or, in a totally different direction, "Computer, I am extremely afraid of spiders. Please research how to make my Minecraft game replace all spiders with a similarly sized Totoro Catbus, with all their noises also replaced with meows or purring. Once you have a plan ready, please do it."

                    (Always say "please".)

                    These are things within reach of anyone who can formulate a request for what thing they want their computer to do. Just gotta watch out for "Computer, create a holographic character, an opponent for Data, who has the ability to defeat him".

                      kees@hachyderm.io wrote (last edited)
                      #69

                      @josh @silverwizard @ossguy @bkuhn @karen @wwahammy But that's a slippery slope argument. When the Linux kernel can be considered to have been "substantially contributed to by LLMs", we can compare notes again. But in the meantime, consider that, for example, Sashiko counts as "contributing to Linux" without landing a single line of code: its patch reviews are (more often than not) extensive, thoughtful, and correct:
                      https://lore.kernel.org/lkml/CAADnVQ+NMQMpkG8gZPnwBD1MMPsH+uJ65C9bMeGf_YH5Cchxpg@mail.gmail.com/


                        josh@social.joshtriplett.org wrote (last edited)
                        #70
                        "Words that aren't there" like this?
                        > Historically, software freedom has typically necessitated interacting with others

                        Suggesting that this is merely "historically"?

                        > more easily with LLM-backed generative AI coding tools (and the ease with which changes can be made generally) there is less of a natural tendency for people to work with existing FOSS communities. And we should be ok with that!

                        We should be okay with that? We should not treat it as an *existential threat* and respond accordingly? Those are the words that aren't there?
                        • downey@floss.social wrote (last edited)
                          #71

                          @wwahammy

                          Follow the money.


                            firefly_lightning@convenient.email wrote (last edited)
                            #72
                            @bkuhn @josh @kees @ossguy Can you elaborate on the sides of this arms race? Every time I think I know the purpose, it turns out I'm misunderstanding something about the point of this discussion.
                            • kees@hachyderm.io wrote (last edited)
                              #73

                              @wwahammy @ossguy @josh I'll bite: is this directed at me? If so, are you suggesting I'm not aware of the externalized costs of LLMs?


                                josh@social.joshtriplett.org wrote (last edited)
                                #74
                                There are more projects out there than the Linux kernel. Smaller projects with fewer maintainers can more quickly get overwhelmed. And when you have a smaller project, or an area of a project, with only a few maintainers, it only takes one or two LLM users and a pile of tokens to turn that area into *primarily* LLM-written material or introduce way too much complexity.

                                And to be clear, I'm not arguing against the careful use of (for instance) LLM security analyses, by people who want to run those *and filter the results*. But nobody should be forced to deal with LLM output who didn't sign up for it, and that includes LLM-written patches and LLM-written mails.
                                • kees@hachyderm.io wrote (last edited)
                                  #75

                                  @wwahammy @josh @silverwizard @ossguy @bkuhn @karen

                                  Honestly, I kind of view "finding security bugs fast" to be a form of slop. (Though deep correct root cause analysis of those bugs is not slop.) Now *fixing* security bugs fast, that's interesting.

                                  But back to the community aspect of it... I'll call attention to my silly Minecraft example: people who are not coders can suddenly get meaningful (even if only to them) things done. This is a massive shift in the ethical impact of software being Libre. And this is how I read @ossguy's post: we now have a giant population of people entering the FOSS universe, and it's going to look a lot like Endless September, so we need to adapt those lessons so we can successfully educate and retain the people who will be good citizens.

                                  • cwebber@social.coop

                                    @bkuhn @ossguy The surprising thing about saying "seriously consider cautiously and carefully incorporating their workflows with ours" is that it doesn't address at all my *biggest* fear: the copyright status of LLM generated contributions seems currently unsettled.

                                    I know there have been assertions to the contrary floating around: the Supreme Court deferred to a lower court in the US. However, that is not the same thing as the Supreme Court making a specific decision. And internationally, the copyright situation of output is even murkier... it will take a long time for this to settle.

                                    Does Conservancy not think this is the case? I would be surprised if so, but perhaps you all have an interpretation that I am not currently aware of.

                                    If there *is* concern, then we hit a serious risk: we may be seeing many contributions with legal status which has *yet to be determined* entering seasoned codebases. And this worries me a lot.

                                    ovrim@wien.rocks wrote (last edited)
                                    #76

                                    @cwebber @bkuhn @ossguy At least as big as the copyright/authorship question are "which license(s) apply to the patch/software" and "what about patent infringements" ....

                                      kees@hachyderm.io wrote (last edited)
                                      #77

                                      @josh @silverwizard @ossguy @bkuhn @karen @wwahammy But this is strictly a volume question. Literal spam used to be (and still can be) a problem on issue trackers, mailing lists, etc. Volume is always a problem, and I agree review time now becomes even more precious, but it's always been trust-gated. Human relationships, CI, and regression tests all help build that trust signal. If a project doesn't want a contribution, then the PR will just languish. Nobody is being *forced* to take PRs, regardless of origin.

                                      "I don't recognize the sender of this [email/voicemail/PR]." Filtered! Yes, the shape of the thing is different, but we always adapt.
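The trust-gating described here can be sketched purely as a hypothetical illustration (the field names and policy below are invented; this is not any forge's real API):

```python
def triage(submissions, known_contributors):
    """Hypothetical triage: queue contributions from recognized senders
    for human review; leave the rest to languish, regardless of how
    the contribution was produced."""
    review_queue, ignored = [], []
    for submission in submissions:
        if submission["sender"] in known_contributors:
            review_queue.append(submission)
        else:
            ignored.append(submission)
    return review_queue, ignored
```

The design choice mirrors the argument: the gate keys on an existing trust signal (the sender relationship), not on the origin of the text, so volume from unknown senders never consumes review time.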


                                        linux_mclinuxface@fosstodon.org wrote (last edited)
                                        #78

                                        @bkuhn
                                        I’ll jump in here.

                                        I’ve read the blog post 4x now trying to back into what you’re conveying here and… I’m sorry, I cannot.

                                        The post does not strike the tone of a good-faith “discussion” about what should be done, but rather one in which the community will be told to accept something.

                                        I am reading the words there and the chosen words/phrasing throughout point to the conclusion people are making.

                                        @josh @ossguy @kees


                                          firefly_lightning@convenient.email wrote (last edited)
                                          #79

                                          @kees @karen @josh @silverwizard @wwahammy @ossguy @bkuhn

                                          This is an aside, but I am surprised to see anyone say there's nothing novel to object to about LLMs. I might post about that tomorrow, as it's late now where I am. But I would definitely love to know more about why you think that, because a major concern I have with LLMs is what Sean calls epistemological collapse: isn't it under-discussed how pervasively they're destroying the trustworthiness of information? Anyway, I should collect my sources and make a complete argument on my personal instance, if anyone cares what I think (feel free not to).

                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop