FARVEL BIG TECH
👀 … https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ … my colleague Denver Gingerich writes: newcomers' extensive reliance on LLM-backed generative AI is comparable to the Eternal September onslaught on USENET in 1993.

llmopensource
310 Posts 57 Posters 0 Views
This thread has been deleted. Only users with topic-management privileges can see it.
  • kees@hachyderm.io

    @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

    I consider the cognition impairment hazards to overlap with the existing manipulation/critical-thinking hazards that capitalism depends on, with advertising being probably the most dangerous example (both explicit and implicit manipulation of many cognitive systems: confidence, selection, recency, etc etc).

    IMHO LLMs are "just" a subset/extension of this existing problem. And I categorize it there because I think the defenses against their negative impacts are very similar.

    wwahammy@social.treehouse.systems (This user is from outside of this forum)
    wrote, last edited by
    #127

    @kees @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @xgranade I think you are wildly underestimating the cognitive hazards. Like I hesitate to even say "wildly underestimating" because that phrase is not strong enough.

    • wwahammy@social.treehouse.systems

      bkuhn@fedi.copyleft.org (This user is from outside of this forum)
      wrote, last edited by
      #128

      @wwahammy @kees IMO you're both right.
      LLM-backed gen. AI is a dangerous tool w/ potential to not only atrophy the skillsets of experienced developers *but also* lead newcomers to *never develop those skills*.
      Our charge is to create policies that encourage extremely disciplined use of these systems.
      I support decriminalization of recreational substances. But, such has to come with major funding for addiction support. IMO the analogy is apt.
      @glitzersachen @josh @silverwizard @ossguy @xgranade

      • glitzersachen@hachyderm.io

        @kees @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

        > to overlap with the existing manipulation/critical-thinking hazards that capitalism

        I think it's more than that, not only the manipulation part. LLMs actively corrode the skills of their users. Not just through disuse. No, actually worse.

        I hope you have heard about this possibility (whether you believe in it or not).

        kees@hachyderm.io (This user is from outside of this forum)
        wrote, last edited by
        #129

        @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

        > LLMs actively corrode skills of the users

        Yup, very aware. It's a specific instance of what I still see as a larger critical thinking erosion happening all around us.

        • bkuhn@fedi.copyleft.org

          @lumi

          These are good ideas. I hope you can come to one of the chats and share them, but I've bookmarked your post so I am sure the ideas get considered.

          cc: @mathieui @tito @ossguy

          lumi@snug.moe (This user is from outside of this forum)
          wrote, last edited by
          #130

          @bkuhn @mathieui @tito @ossguy thanks for the invitation, I'll try!

          • kees@hachyderm.io

            glitzersachen@hachyderm.io (This user is from outside of this forum)
            wrote, last edited by
            #131

            @kees @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

            I suspect (as some scientists do as well) that this is not a cultural, but a neurological phenomenon. So I think this is really on a totally different level.

            The skill erosion I am talking about has nothing to do AT ALL with critical thinking.

            Best case, it's just that when, of two synaptic circuits (use the translation tool vs. retrieve from memory), the one which wanted to activate but was not chosen is actively deleted or weakened. My understanding is that this is how biological brains work and learn.

            The worse alternative is that the output of LLMs has some as-yet insufficiently described hidden quality which poisons the neural networks that process it.

            One hint in this direction is that LLM models which consume the output of other LLM models during training collapse. That's a clear indicator that on some level LLM output is observably different from human language production, even though we humans, on average, have a hard time telling the difference.

            And this is what I am talking about: not a loss of cultural techniques or of learned skill through atrophy or no longer being taught, but the poisoning of neural networks by input they cannot firewall, because they have not evolved to recognize it as a hazard.

            • bkuhn@fedi.copyleft.org

              glitzersachen@hachyderm.io (This user is from outside of this forum)
              wrote, last edited by
              #132

              @bkuhn @wwahammy @kees @josh @silverwizard @ossguy @xgranade

              Let me point you to my reply here => https://hachyderm.io/@glitzersachen/116421481982246037.

              I really think the issue at the core *might* not be losing skills by neglecting to exercise them, but rather the poisoning of neural networks. Brainwashing them into (skill) oblivion.

              The comparison to hard drugs would be apt, if this is true.

              And our employers want us to ruin our skills and our brains. They obviously don't believe in a common future with their knowledge workers anymore...

              • wwahammy@social.treehouse.systems

                kees@hachyderm.io (This user is from outside of this forum)
                wrote, last edited by
                #133

                @wwahammy @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @xgranade

                Right; it is an extremely focused risk (differing from the larger varieties and sources of critical thinking erosion). And every piece of research I've seen with regard to "how to safely use LLMs in education" confirms this with bright flashing lights: there is none. LLMs appear to have a universally negative impact in education.

                • bkuhn@fedi.copyleft.org

                  wwahammy@social.treehouse.systems (This user is from outside of this forum)
                  wrote, last edited by
                  #134

                  @bkuhn @kees @glitzersachen @josh @silverwizard @ossguy @xgranade

                  This is not a remotely accurate analogy. The level of rage in this country over AI is uncontrollable, and it's accelerating. Two people tried to kill Sam Altman in the last week. An Indiana planning official's house was shot at after they approved a new data center.

                  In the political realm, the shift is unimaginably swift. Ex: 6 months ago, no Democrat running for WI governor had a policy on data centers, because building unions wanted them. Now every one of them is fighting over how strict their ban on data centers is.

                  The best analogy I can think of is the opioid crisis. When people were ready to kill the Sacklers and everyone at Purdue Pharma, you couldn't come in and say anything that made people think you were tolerant of the damage. You couldn't even argue "we can punish these people, but we have to protect access to opioids". Everyone KNOWS there are uses, but you can't build a policy around that, because the public doesn't care. At all.

                  The only time you could have this discussion was years ago, or will be years in the future, after the public has taken its pound of flesh. Right now, it's an immensely dangerous idea for SFC.

                  • josh@social.joshtriplett.org
                    Talking with them is good. Helping to educate them is good. Making it sound as if what they are doing is okay is *not*.

                    There is a big difference between offering an olive branch to people who *might* be productive contributors in the *future*, and telling them that what they're doing *now* is okay.

                    The best AI policy remains "do not contribute any LLM-written content, ever". You have published a post that makes it easier for people who oppose such policies to cite your "olive branch" when arguing against it, and it is not obvious from your post that you do not want that to happen.

                    I don't want to see people *abused* for using LLMs. I do want them to understand that what they're doing is not okay and not welcome and not a positive contribution.
                    glitzersachen@hachyderm.io (This user is from outside of this forum)
                    wrote, last edited by
                    #135

                    @josh @silverwizard @ossguy @bkuhn @karen @wwahammy

                    As far as I am concerned people should be "abused" for shilling AI from a position where they really don't have any sufficient insight. Like middle management trying to push AI on reluctant software engineers with all the tricks in the book (for example tying performance review results to AI use). This behavior destroys trust and workplace culture. What do they think? That the engineers don't understand their own work mode? The hubris of management: "I'll tell you how you can work better. I know better how you can work better."

                    And this behavior needs to be called out.

                    • davidgerard@circumstances.run

                      @wwahammy @silverwizard @firefly_lightning @cwebber @ossguy yeah, "great question! come over to crime scene 2 for an answer perhaps!" has never been a good look.

                      it was presented as human written text. The human who signs their name to it should be able to answer text-based questions about it in written form.

                      ossguy@fedi.copyleft.org (This user is from outside of this forum)
                      wrote, last edited by
                      #136

                      @davidgerard @wwahammy @silverwizard @firefly_lightning @cwebber Yes, which is why it's important to allow people to identify when they have used LLM/AI assistants to help. New contributors will see this is the norm, and then it will be easier to help them, because we'll know a bit about where any potential knowledge gaps might be coming from.

                      If we "ban" LLM/AI-assisted contributions, people will use them anyway but hide their use, which is a trickier problem to solve.

                      • glitzersachen@hachyderm.io

                        josh@social.joshtriplett.org (This user is from outside of this forum)
                        wrote, last edited by
                        #137
                        People shouldn't be *abused*, ever. If people are *shilling* AI and trying to force it on others, they might deserve some amount of shame and disapprobation. But nobody deserves abuse.
                        • bkuhn@fedi.copyleft.org

                          (2/5) … In https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ ,
                          Denver's key points are: we *have* to (a) be open to *listening* to people who want to contribute to #FOSS with #LLM-backed generative #AI systems, & (b) work collaboratively on a *plan* for how we can solve the current crisis.

                          Nothing good ever gets done politically when both sides become more entrenched, refuse to even concede the other side has some valid points, & each say the other is the Enemy. …

                          Cc: @wwahammy @silverwizard @cwebber

                          #OpenSource

                          miss_rodent@girlcock.club (This user is from outside of this forum)
                          wrote, last edited by
                          #138

                          @bkuhn For those who are acting in good faith, and willing to contribute in healthy ways - yes, it's absolutely worth while to talk to them, and try to get them to contribute in good ways. If you can, have time, are not drowning in so much slop that you can't tell who means well and who just needs to be blocked/banned, etc. - integrating people into the community is a lot of work, and a lot of people maintaining and making free software are already doing a lot of work for free as it is. (1/?)

                          • glitzersachen@hachyderm.io

                            kees@hachyderm.io (This user is from outside of this forum)
                            wrote, last edited by
                            #139

                            @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

                            Right, yeah, this is why I've cautioned people about *how* they use LLMs. You've distilled it more clearly, and it lines up with my own intuition about how human memory systems work: retrieval is effectively erasure, so "remembering" requires retrieval and re-storage. Research into treating PTSD (IIRC?) and such found that blocking storage (with drugs or EM) and then triggering recall would wipe memories. You're describing a potentially purely experiential way to do this, which is terrifying.

                            I feel like using an LLM can lead to a Dunning-Kruger-like effect, in that you think you know what it did, but you don't. And this belief satisfies the need/instinct to learn/know what happened without having actually done so. (Reminds me of how making a TODO list gives you a dopamine hit that kills the need to actually *do* the list.)

                            • miss_rodent@girlcock.club

                              miss_rodent@girlcock.club (This user is from outside of this forum)
                              wrote, last edited by
                              #140

                              @bkuhn That does not mean that LLM-generated code, assets, or outputs can be allowed in free-software projects. Which is an important distinction.
                              If someone wants to contribute, that's great - point them to resources of how to do so, how to make submissions that can be accepted. If they won't contribute without using claude or whatever, then their contributions must be refused.
                              The ethical, environmental, public health, freedom/human rights, issues of LLMs as they exist are too severe (2/?)

                              • miss_rodent@girlcock.club

                                miss_rodent@girlcock.club (This user is from outside of this forum)
                                wrote, last edited by
                                #141

                                @bkuhn to be acceptable in free software. I would go so far as to say the free software definition should be amended, to exclude any component generated by an LLM or similar generative program that is not:
                                - Purely deterministic OR
                                - Completely free, including all training data, weights, source code, training processes, etc. such that a user (with sufficient resources) could recreate the process from the ground up, in the same way that a user can re-compile GCC, & uses only (3/?)

                                • miss_rodent@girlcock.club

                                  miss_rodent@girlcock.club (This user is from outside of this forum)
                                  wrote, last edited by
                                  #142

                                  @bkuhn Ethically-acquired (so, no DoSing an artist's website for it, no ignoring a robots.txt to scrape it, etc.) training data.

                                  This doesn't solve the *other* ethical problems, but the nature of these generators is that without freedom in the full chain from data to output as a minimum, they should be excluded from the free software definition, in the same way that inserting a binary blob into the Linux kernel makes it at least partially non-free. The LLM is a non-free black box without those (4/?)

                                  • miss_rodent@girlcock.club

                                    miss_rodent@girlcock.club (This user is from outside of this forum)
                                    wrote, last edited by
                                    #143

                                    @bkuhn parts being as free as the source code, and including their outputs should be treated as a binary blob - since there is no way to investigate the process behind how it ended up there or was created. It can't even be meaningfully reverse-engineered in most cases.

                                    I'd also add to any project refusal of any generated content that carries the climate, freedom, rights, labour rights, etc. concerns as well - but at a minimum, the outputs of an LLM can not be considered free. (5/5)

                                    • miss_rodent@girlcock.club

                                      miss_rodent@girlcock.club (This user is from outside of this forum)
                                      wrote, last edited by
                                      #144

                                      @bkuhn The *people* coming in, if they mean well and will contribute in ethical ways, are fine, and worth welcoming in.
                                      But the LLMs themselves - the use of the tool - as it exists undermines the ethical core of the free software movement, and carries too many other ethical problems to be acceptable.
                                      (6/5)

                                      • miss_rodent@girlcock.club

                                        miss_rodent@girlcock.club (This user is from outside of this forum)
                                        wrote, last edited by
                                        #145

                                        @bkuhn This leaves room for an ethical, actually free, version of the tech, should it appear at some point, which is a compromise; my instinct is that there can be no ethical version, but I could be wrong.
                                        As-is though, the LLMs & generative systems in use are a black box. Even 'open source' ones, if they do not also provide full access to the training data & methodology: including them in free software is no better than proprietary code. The definitions & licenses need to reflect this. (7/5)

                                        1 Reply Last reply
                                        0
                                        • kees@hachyderm.ioK kees@hachyderm.io

                                          @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

                                          Right, yeah, this is why I've cautioned people about *how* they use LLMs. You've distilled it more clearly and lines up with my own intuition that reminds me about how human memory systems work: retrieval is effectively erasure, so "remembering" requires retrieval and storage. Research into treating PTSD (IIRC?) and such found that blocking storage (with drugs or EM) and then triggering recall would wipe memories. You're describing a potentially purely experiential way to do this, which is terrifying.

                                          I feel like using an LLM can lead to a Dunning-Kruger-like effect, in that you think you know what it did, but you don't. And this belief satisfies the need/instinct to learn/know what happened without your having actually done so. (Reminds me of making a TODO list, where the dopamine hit from writing it kills the drive to actually *do* the list.)

                                          kees@hachyderm.io
                                          wrote, last edited
                                          #146

                                          @glitzersachen @josh @silverwizard @ossguy @bkuhn @karen @wwahammy @xgranade

                                          I lump my experiences of software-engineering use of LLMs into 3 modes:

                                          1) "work together": I am watching everything it is doing, reviewing every step, and contributing to the result in tandem. This doesn't feel to me like anything is being eroded on my end. But I'm also a deep sceptic of its output.

                                          2) "do the thing I know how to do for me": this is super dangerous, as I think I'm solving problems I am familiar with, but I didn't follow the results closely, and I'm left with a deep erosion of my comprehension of both the problem and the solution.

                                          3) "vibe coding": I have no idea what it is doing with a thing I don't know about, and I know I have no idea what it is doing. This doesn't seem to erode anything. It does create a new problem for me, though: if the LLM can't solve some problem, neither can I.

                                          I've felt #2 a few times, and I had the alarm bells in place to shift myself back to #1, which required doing a full review, looking back through the reasoning, and checking the work. The risk of being drawn into #2 is high given the sycophancy of the models, but I think my suspicion of it has helped me avoid this a bit. 😅 (And perhaps I am more deluded than I think.)

                                          #3 I have done for educational/amusement purposes, but it's an uncommon mode for me because what's the point of creating a thing I don't understand and can't fix?

                                          ("I can quit any time!")

                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop