FARVEL BIG TECH
👀 … https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ …my colleague Denver Gingerich writes: newcomers' extensive reliance on LLM-backed generative AI is comparable to the Eternal September onslaught to USENET in 1993.

llmopensource
310 Posts · 57 Posters · 0 Views
This thread has been deleted. Only users with topic management privileges can see it.
  • bkuhn@fedi.copyleft.org wrote:

    @mathieui
    I agree FOSS projects should make their own policies. Some will (& should!) have a zero-tolerance abstinence policy on any contribution that has been touched, even slightly, by an LLM-backed generative AI system.
    Yet, even among SFC projects, some asked us to help them create a more nuanced policy.
    Should we just kick those projects out of SFC, or have a nuanced, humans-only conversation?
    It's ok if you do not want to join that, but we'd also be glad to have you.
    Cc: @tito @ossguy

    mathieui@piaille.fr wrote (#116):

    @bkuhn @tito @ossguy I understand the need and do not intend to throw stones at the SFC here at all. I have diverging ethical considerations, and I am way too tired of it all (particularly writing non-FOSS software at work in the current LLM-crazed atmosphere) to even think about joining an oral conversation about it, in a language I am somewhat fluent, but not articulate, in.

    I'm all for welcoming volunteers who want to do work on FOSS projects, but that means onboarding and doing actual work; if I wanted to run Claude on my code to do stuff, I don't need other people to do that, so what would be the point of recruiting volunteers?

    • bkuhn@fedi.copyleft.org wrote (#117), replying to mathieui@piaille.fr:

      @mathieui

      Actually, I'm absolutely 🤮y re: talking about LLM-backed generative AI too! I've been talking about it for 4 years now 😩:
      https://sfconservancy.org/blog/2022/feb/03/github-copilot-copyleft-gpl/

      But I'm a senior policy wonk in FOSS, & it's my day job. Everyone has crap they gotta do in their day job that isn't their favorite, & this is mine.

      Speaking of bad stuff at day jobs: many people's day jobs MANDATE LLM-backed AI usage. Such a mandate is definitely immoral; it should always be the developers' choice.

      Cc: @tito @ossguy

      • cwebber@social.coop wrote:

        @bkuhn @ossguy The surprising thing about saying "seriously consider cautiously and carefully incorporating their workflows with ours" is that it doesn't address my *biggest* fear at all: the copyright status of LLM-generated contributions currently seems unsettled.

        I know there have been assertions to the contrary floating around: the Supreme Court deferred to a lower court in the US. However, that is not the same thing as the Supreme Court making a specific decision. And internationally, the copyright situation of output is even murkier... it will take a long time for this to settle.

        Does Conservancy not think this is the case? I would be surprised if so, but perhaps you all have an interpretation that I am not currently aware of.

        If there *is* concern, then we hit a serious risk: we may be seeing many contributions whose legal status has *yet to be determined* entering seasoned codebases. And this worries me a lot.

        richardfontana@mastodon.social wrote (#118):

        @cwebber I truly don't think this is a new situation @bkuhn @ossguy

        • cwebber@social.coop wrote (#119), replying to richardfontana@mastodon.social:

          @richardfontana @bkuhn @ossguy In which of the 5 million ways I could parse that sentence do you mean it?

          • lumi@snug.moe wrote:

            @bkuhn @mathieui @tito @ossguy i think it is good to have a nuanced conversation, but still be stern in that this unethical technology will not be allowed. the ethical issues of it are just too big, it would almost be as bad as allowing proprietary software in, i would say

            education is important, and it is important to first educate and give some time before making a decision, but still be stern about it, as this is a deep ethical issue where we should have zero tolerance

            zero-tolerance here would mean not allowing the project to endorse or use any genai. if usage of it is snuck in, try to revert it as best as possible. if it was used before, do the same. but having some genai commits in is not that important, to me

            of course, mistakes may be made. we should not be scrutinizing commits very heavily and going on witch hunts. but genai usage, for code, assets, writing, docs and anything else, must not be allowed

            what's important to me is the stance of the project going forward. to be against it completely

            bkuhn@fedi.copyleft.org wrote (#120):

            @lumi

            These are good ideas. I hope you can come to one of the chats and share them, but I've bookmarked your post to make sure the ideas get considered.

            cc: @mathieui @tito @ossguy

            • bkuhn@fedi.copyleft.org wrote (#121):

              @hipsterelectron

              First, I speak for myself, not SFC on this account. I work for SFC, but my words are not SFC's words by default. I *often* am unable to convince SFC to take policies or positions that I want.

              By nuanced, I mean avoiding two sides showing up like it's a protest, where one side shouts "NO AI" and the other side shouts "ALL AI ALL THE TIME"; that won't get us anywhere at all.

              I'm very close to the "NO AI" side, but I'm a few steps in the other direction.

              Cc: @mathieui @tito @ossguy

              • bkuhn@fedi.copyleft.org wrote (#122):

                @davidgerard

                Normally, when someone is invited to a real-time public forum and would prefer to submit written comments, they would ask how to do so rather than being sarcastic and cruel. How odd. 😛

                In seriousness, in parallel @ossguy added info on just what you're asking for at the top of https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/

                • bkuhn@fedi.copyleft.org wrote (#123), replying to cwebber@social.coop:

                  @cwebber May I please introduce you to the cryptic @richardfontana oracle.

                  He's often right and predicts the future well, but figuring out what he means is the riddle.

                  😆

                  I've had this moment with @richardfontana more than a dozen times at least. ☺

                  • kees@hachyderm.io wrote:

                    @josh @silverwizard @ossguy @bkuhn @karen @wwahammy

                    I can understand having an absolutist position against LLMs. I find that most arguments are either irrelevant to me or directly map to existing arguments about late-stage capitalism. So for me, there's nothing novel to object to about LLMs.

                    So with that in mind, I find "all contributions derived from LLMs should be rejected" to be misguided. I look at things like the bug fixes coming out of CodeMender (back in Feb, which is an LLM lifetime ago), and I am a huge fan. Fixing stuff found by a fuzzer:
                    https://issues.oss-fuzz.com/issues/486561029

                    It's a small example, but it's an area that humans alone have not been able to remotely keep up with. (There are hundreds of open syzkaller bug reports, for example.) Gaining tools that will help with this is a big deal, and I'm glad for the assist.

                    glitzersachen@hachyderm.io wrote (#124):

                    @kees @josh @silverwizard @ossguy @bkuhn @karen @wwahammy

                    > I find that most arguments are either irrelevant to me or directly map to existing arguments about late-stage capitalism.

                    Where does "LLMs are cognito hazards" (@xgranade) fit in?

                    Asking for a friend.

                    • kees@hachyderm.io wrote (#125), replying to glitzersachen@hachyderm.io:

                      @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

                      I consider the cognition impairment hazards to overlap with the existing manipulation/critical-thinking hazards that capitalism depends on, with advertising being probably the most dangerous example (both explicit and implicit manipulation of many cognitive systems: confidence, selection, recency, etc etc).

                      IMHO LLMs are "just" a subset/extension of this existing problem. And I categorize it there because I think the defenses against their negative impacts are very similar.

                      • glitzersachen@hachyderm.io wrote (#126), replying to kees@hachyderm.io:

                        @kees @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

                        > to overlap with the existing manipulation/critical-thinking hazards that capitalism

                        I think it's more than just the manipulation part. LLMs actively corrode the skills of their users. Not merely through disuse. No, actually worse.

                        I hope you have heard about this possibility (whether you believe in it or not).

                        • wwahammy@social.treehouse.systems wrote (#127), replying to kees@hachyderm.io:

                          @kees @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @xgranade I think you are wildly underestimating the cognitive hazards. Like I hesitate to even say "wildly underestimating" because that phrase is not strong enough.

                          • bkuhn@fedi.copyleft.org wrote (#128), replying to wwahammy@social.treehouse.systems:

                            @wwahammy @kees IMO you're both right.
                            LLM-backed gen. AI is a dangerous tool w/ potential to not only atrophy the skillsets of experienced developers *but also* lead newcomers to *never develop those skills*.
                            Our charge is to create policies that encourage extremely disciplined use of these systems.
                            I support decriminalization of recreational substances. But, such has to come with major funding for addiction support. IMO the analogy is apt.
                            @glitzersachen @josh @silverwizard @ossguy @xgranade

                            • kees@hachyderm.io wrote (#129), replying to glitzersachen@hachyderm.io:

                              @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

                              > LLMs actively corrode skills of the users

                              Yup, very aware. It's a specific instance of what I still see as a larger critical thinking erosion happening all around us.

                              • lumi@snug.moe wrote (#130), replying to bkuhn@fedi.copyleft.org:

                                @bkuhn @mathieui @tito @ossguy thanks for the invitation, i'll try!

                                • glitzersachen@hachyderm.io wrote (#131), replying to kees@hachyderm.io:

                                  @kees @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

                                  I suspect (as some scientists do as well) that this is not a cultural but a neurological phenomenon. So I think this is really on a totally different level.

                                  The skill erosion I am talking about has nothing to do AT ALL with critical thinking.

                                  Best case, it's just that of two synaptic circuits (use the translation tool vs. retrieve from memory), the one which wanted to activate but was not chosen is actively deleted or weakened. My understanding is that this is how biological brains work and learn.

                                  The worse alternative is that the output of LLMs has some hidden, not yet sufficiently described quality which poisons the neural networks that process it.

                                  One hint in this direction is that LLM models which consume the output of other LLM models in training collapse. That's a clear indicator that on some level LLM output is observably different from human language production, though we as humans have on average a hard time telling the difference.

                                  And this is what I am talking about: not a loss of cultural techniques or of learned skill through atrophy or no longer being taught, but the poisoning of neural networks by input they cannot firewall, because they have not evolved to recognize it as a hazard.

                                  • glitzersachen@hachyderm.io wrote (#132), replying to bkuhn@fedi.copyleft.org:

                                    @bkuhn @wwahammy @kees @josh @silverwizard @ossguy @xgranade

                                    Let me point you to my reply here => https://hachyderm.io/@glitzersachen/116421481982246037.

                                    I really think the issue at the core *might* not be losing skills by neglecting to exercise them, but rather the poisoning of neural networks. Brainwashing them into (skill) oblivion.

                                    The comparison to hard drugs would be apt, if this is true.

                                    And our employers want us to ruin our skills and our brains. They obviously don't believe in a common future with their knowledge workers anymore...

                                    • kees@hachyderm.io wrote (#133), replying to wwahammy@social.treehouse.systems:

                                      @wwahammy @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @xgranade

                                      Right; it is an extremely focused risk (differing from the larger varieties and sources of critical thinking erosion). And every piece of research I've seen with regard to "how to safely use LLMs in education" confirms this with bright flashing lights: there is none. LLMs appear to have a universally negative impact in education.

                                      • wwahammy@social.treehouse.systems wrote (#134), replying to bkuhn@fedi.copyleft.org:

                                        @bkuhn @kees @glitzersachen @josh @silverwizard @ossguy @xgranade

                                        This is not a remotely accurate analogy. The level of rage in this country over AI is uncontrollable and it's accelerating. Two people tried to kill Sam Altman in the last week. An Indiana planning official's house was shot after they approved a new data center.

                                        In the political realm, the shift is unimaginably swift. Ex: 6 months ago, no Democrat for WI governor had a policy on data centers because building unions wanted them. Now every one of them is fighting over how strict their ban on data centers is.

                                        The best analogy I can think of is the opioid crisis. When people were ready to kill the Sacklers and everyone at Purdue Pharma, you couldn't come in and say anything that made people think you were tolerant of the damage. You couldn't even argue "we can punish these people but we have to protect access to opioids". Everyone KNOWS there are uses, but you can't build a policy around that, because the public doesn't care. At all.

                                        The only time you could have this discussion was years ago, or years in the future after the public has taken its pound of flesh. Right now, it's an immensely dangerous idea for SFC.

                                        • josh@social.joshtriplett.org wrote:
                                          Talking with them is good. Helping to educate them is good. Making it sound as if what they are doing is okay is *not*.

                                          There is a big difference between offering an olive branch to people who *might* be productive contributors in the *future*, and telling them that what they're doing *now* is okay.

                                          The best AI policy remains "do not contribute any LLM-written content, ever". You have published a post that makes it easier for people who oppose such policies to cite your "olive branch" when arguing against it, and it is not obvious from your post that you do not want that to happen.

                                          I don't want to see people *abused* for using LLMs. I do want them to understand that what they're doing is not okay and not welcome and not a positive contribution.
                                          glitzersachen@hachyderm.io wrote (#135):

                                          @josh @silverwizard @ossguy @bkuhn @karen @wwahammy

                                          As far as I am concerned people should be "abused" for shilling AI from a position where they really don't have any sufficient insight. Like middle management trying to push AI on reluctant software engineers with all the tricks in the book (for example tying performance review results to AI use). This behavior destroys trust and workplace culture. What do they think? That the engineers don't understand their own work mode? The hubris of management: "I'll tell you how you can work better. I know better how you can work better."

                                          And this behavior needs to be called out.

                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop