FARVEL BIG TECH

Theory (which is mine) (and what it is, too): Enshittification has led directly to acceptance of LLMs, because the public is already used to software that is unfit for purpose.

7 posts, 7 posters, 0 views

This topic has been deleted. Only users with topic management privileges can see it.
#1 wendynather@infosec.exchange wrote (last edited):

    Theory (which is mine) (and what it is, too): Enshittification has led directly to acceptance of LLMs, because the public is already used to software that is unfit for purpose.

    @pluralistic


#2 mangochutney@social.lol wrote (last edited):

@wendynather @pluralistic And it simply overshadows all of the things this technology can be useful for.


#3 forthy42@mastodon.net2o.de wrote (last edited):

@wendynather @pluralistic I suppose there's something worse here: post a question on some social network and you get offensive answers and misleading, wrong answers, all produced with natural intelligence (or stupidity), and an LLM, despite its problems, beats all of that.

There might be better humans out there, but the enshittification of those platforms makes sure you meet the assholes first.


#4 halbgarheiten@sciences.social wrote (last edited):

@wendynather @pluralistic And: already used to working in bullshit jobs.


#5 mro@digitalcourage.social wrote (last edited):

            @wendynather @pluralistic
            💯


#6 thegreatllama@kolektiva.social wrote (last edited):

              @wendynather @pluralistic
Yep. Search engines too: people used to search for an answer; now they ask an LLM. They didn't start doing it because they wanted a machine to think for them; they started because they stopped getting useful search results.


#7 audioflyer79@mstdn.social wrote (last edited):

To be fair, any human you ask a question is going to give you wrong answers SOME of the time, often with complete certainty. Expecting an LLM to always get it right is unrealistic and misguided. Trust, but verify if the answer is important. Not sure why we would expect any computer to come up with the right answer every time.

jwcph@helvede.net shared this topic.
                Powered by NodeBB Contributors
                Graciously hosted by data.coop