FARVEL BIG TECH

This blogpost makes an astoundingly good case about LLMs I hadn't considered before.

59 posts · 42 posters
This thread has been deleted. Only users with topic-management privileges can see it.
• In reply to michiel@social.tchncs.de:

  @cwebber you calling it an 'astoundingly good case' makes me feel insightful in a way no LLM has been able to accomplish. I'm going to be insufferably smug for the rest of the day 🙂

  cwebber@social.coop wrote (#36):

  @michiel Haha, you deserve it! An angle I hadn't considered; it really shook me up and I've spent a ton of time thinking about it since.

• cwebber@social.coop's original post:

  This blogpost makes an astoundingly good case about LLMs I hadn't considered before. The collapse of public forums (like Stack Overflow) for programming answers coincides directly with the rise of programmers asking for answers from chatbots *directly*. Those debugging sessions become part of a training set that now *only private LLM corporations have access to*. This is something that "open models" seemingly can't easily fight. https://michiel.buddingh.eu/enclosure-feedback-loop

  datarama@hachyderm.io wrote (#37):

  @cwebber I've been saying this for a while. Bubble or not, our profession (and/or vocation, if you prefer) is screwed.

• In reply to cwebber@social.coop's original post:

  twobiscuits@graz.social wrote (#38):

  @cwebber as in many other fields, we have to have real communities who care about stuff.

• In reply to datarama@hachyderm.io (#37):

  cwebber@social.coop wrote (#39):

  @datarama Possibly, though I worry less about professions/vocations than I do about user empowerment. I have long assumed that some day programmer salaries would be unsustainable.

  Of course the irony is that many people are shilling LLM services as empowerment systems. I see them as the opposite. Open, community-developed LLMs could be, but LLM-as-a-service corporations definitively are not.

• In reply to cwebber@social.coop (#39):

  datarama@hachyderm.io wrote (#40):

  @cwebber By vocation, I also mean "people who like to write software".

  If I lost my job but still had that, I'm sure I could become a happy store clerk or train driver who hacked on community software in my free time. But in AI Hell, we can't even have that. My option is to become a miserable store clerk or train driver (until that too is automated away) who consumes AI-generated slop forever. And that is what is coming for all of us - current-day programmers are just going to get there first.

  (Incidentally, I make less than a third of what people on the internet tell me American software developers with my level of experience do - but I'm no more or less screwed than they are.)

• In reply to cwebber@social.coop (#39):

  datarama@hachyderm.io wrote (#41):

  @cwebber And the problem is, LLM development is *extremely* capital-intensive. Unless you have a "community" of billionaires, it's going to be very hard to make anything that can compete with the hyperscalers.

• In reply to datarama@hachyderm.io (#40):

  randomgeek@masto.hackers.town wrote (#42):

  @datarama @cwebber I can attest that it's still possible to hack on free software in your spare time if you lose the tech job, but you get a heck of a lot less free time to do it in, and a heck of a lot less energy to do it with. All against a billionaire-induced media backdrop of your primary interest now being irrelevant, which is demoralizing.

  But if you can find the time and maintain the energy, there is still a community even more stubborn than in the "GPL is a cancer" days.

• In reply to randomgeek@masto.hackers.town (#42):

  datarama@hachyderm.io wrote (#43):

  @randomgeek @cwebber It's *possible*, of course, but it all feels rather pointless now.

  And everything you make and share freely is appropriated to improve the Immiseration Machine.

• In reply to martijn@scholar.social:

  @cwebber but also, as uninviting as the Stack Overflow culture may have been, the moderators were there to try to get people to ask better questions. I doubt LLMs will handle things like X/Y-problem issues, so to me it seems things will get worse for people able/willing to pay as well.

  mbpaz@mas.to wrote (#44):

  @martijn @cwebber IMHO Stack Overflow may have been toxic, but it was a sort of forum with low-friction access (easy to search, easy to ask, easy to reply) where you interacted WITH PEOPLE.

  People are key. I remember names from the linux-kernel list in the mid-90s; I joined Mastodon in 2022 and found those same people here.

  Whatever site or forum or network or anything we build, I want to read from people, not bots.

• In reply to datarama@hachyderm.io (#43):

  randomgeek@masto.hackers.town wrote (#45):

  @datarama @cwebber Maybe it's the Finnish ancestry. Maybe it's the autistic tendencies. Maybe that's a redundant assertion.

  Regardless, I gotta keep doing the right thing even if it feels pointless. And if it feels pointless, I'm gonna do the right thing even harder.

• In reply to randomgeek@masto.hackers.town (#45):

  datarama@hachyderm.io wrote (#46):

  @randomgeek @cwebber I'm also autistic. (Though I'm Danish, so the *least* famously crazy kind of Scandinavian. 😜)

  In the beginning of all this, I thought and felt much the same. Now I just feel drained and defeated.

  Because yes, the struggle itself is enough to fill a human heart and we must imagine Sisyphus happy. But it sucks to be Sisyphus when someone has put up a ski lift next to him.

• In reply to cwebber@social.coop's original post:

  matsuzine@hachyderm.io wrote (#47):

  @cwebber I think there is a flaw in the theory that big AI can use this shift from forum to chatbot to train new models. The thing that makes Stack Overflow valuable is not the question but having one or more experts provide an answer, plus a mechanism for others to add weight to it being correct.

  Interactions with LLMs really don't have the same feedback loop. They collect the questions from the users, but there is no expert to provide the answer to train from. I suppose there's some training data there, but nothing nearly as direct as what was originally scraped from SO.

  I suspect training future models is going to be much more challenging.

• In reply to cwebber@social.coop's original post:

  th@social.v.st wrote (#48):

  @cwebber yet another externality for the bot-lickers to ignore when they say "ethical and environmental issues aside..." and praise the occasionally useful slop that the stochastic slot machine gives them as they burn billions of tokens in gas town.

• In reply to cwebber@social.coop's original post:

  tiotasram@kolektiva.social wrote (#49):

  @cwebber I think this is clearly right about enclosure, but wrong about there being a positive side of the loop that helps make LLMs better. When people ask an LLM for help, it just regurgitates old answers; it can't generate new ones. This generates training data about what questions people have, but does not generate training data about solutions, except in rare cases where the user figures out their issue themselves and chats about the solution with the agent. The human experts answering the questions, the Stack Overflow part, are entirely missing from the LLM interaction, unless the solution was *already* in the training data.

• In reply to cwebber@social.coop's original post:

  miki@dragonscave.space wrote (#50):

  @cwebber This goes much, much wider than programming and LLMs.

  In general, the open source world looks with disdain at all kinds of automated feedback collection mechanisms, which the Silicon Valley venture capital tech ecosystem has wholeheartedly embraced. OSS is still stuck in the 1990s mindset of "if there's a problem, somebody will report it to us", and that just isn't true.

  What we're stuck with is OSS solutions with inferior user experiences which nobody wants to use, instead of a compromise where OSS software collects more data than some people would have liked, but actually has users and makes a difference in the world.

  To be fair, there are some good arguments against this (it's much easier to protect user privacy if the only contributors to your code are employees with background checks), but that doesn't make it less of a problem.

• In reply to cmthiede@social.vivaldi.net:

  @mahadevank @cwebber Forget trying to explain that. The "experts" at Davos laid it out for everyone. Yet, somehow, they're still optimistic that one entity dominating all others, essentially destroying competition, will bring forth a world of opportunities. It's an all-out war, and anyone who doesn't have the resources to insert XYZ's brain into their stack is just a foot soldier for those who do.

  mahadevank@mastodon.social wrote (#51):

  @cmthiede @cwebber ah, but the world and nature don't work this way - I mean, we arrived at these systems after realizing that the tyrannical, control-driven systems of yesteryear were never stable.

  The Imperials of Davos may think this way, but that's never how it comes to pass. Let them enjoy their rather small window of opportunity while it lasts.

• In reply to mahadevank@mastodon.social (#51):

  cmthiede@social.vivaldi.net wrote (#52):

  @mahadevank @cwebber I suppose you're right. I've visited spectacular pyramids of civilizations past; these "hyperscalers" are just another chapter in the long book of history.

• In reply to miki@dragonscave.space (#50):

  dalias@hachyderm.io wrote (#53):

  @miki @cwebber I don't care if these things let them gain market share. They are absolutely unethical and must not be copied. If they give them an unfair competitive advantage, the answer is not to copy them but to destroy them.

• In reply to tiotasram@kolektiva.social (#49):

  dalias@hachyderm.io wrote (#54):

  @tiotasram @cwebber The claim seems to be that frustrated users who do know the right answers argue with the chatbots, giving them new training material. If true, this suggests a fun attack... 😈

• In reply to matsuzine@hachyderm.io (#47):

  tonyangelo@mspsocial.net wrote (#55):

  @matsuzine @cwebber it's like a snake eating its own tail; eventually there is nothing left to eat.

Powered by NodeBB
Graciously hosted by data.coop