FARVEL BIG TECH

The LLM discourse on the Fediverse has really irked me the last few days.

125 Posts · 73 Posters · 0 Views
This thread has been deleted. Only users with topic management privileges can see it.
  • reading_recluse@c.im

    The LLM discourse on the Fediverse has really irked me the last few days.

    Refusing to read writing made with the use of LLMs and refusing to give time to writers who use, promote or justify the use of LLMs is not purity culture, it's a boycott. It's a political act of withdrawing my time, resources and support for something that I find deeply morally wrong. It's protest. I have a choice and I refuse.

    LLMs are exploitative, destructive, biased, mediocre parroting machines. Using them has a negative impact on the climate, the arts, the quality of the internet, the job market, the economy, the accessibility of electronics, even on skill development, creativity and mental health. LLMs are made and trained on the unpaid labour of millions, if not billions, of people who didn't consent. Their generic output litters the path to finding anything by true human creators.

    Wherever I can, for as long as I can, I reject LLMs and anything that is related to them. I'm boycotting.

    rubyjones@wandering.shop
    #25

    @reading_recluse 💯

  • dynamite_ready@mastodon.gamedev.place

      @lproven @xs4me2 @reading_recluse

      For generating content of any kind, I think there's a reckoning to come. Especially in the 'agentic' space.

      But for Information Retrieval, LLMs are great, tbh... I'd argue that also includes those far out stories about prompts leading to new scientific theories, or mathematical proofs.

      The tool is a big part of that, but it's the user ('operator'?) that writes the prompts, guides the outcomes, and validates them.

      That's a worthy advance.

      firlefanz@writing.exchange
      #26

      @dynamite_ready

      The problem is that LLMs just make things up. There are no new discoveries, and there is no accurate information retrieval. But people don't notice, because they lack the expertise and the ability to check.

      LLMs cannot be trusted with anything. They are a sheer waste of our world's resources.

      @lproven @xs4me2 @reading_recluse


        davidcoronel@social.laia.ar
        #27

        @reading_recluse I can relate to your stance, but I ultimately decided to take action demanding attribution and compensation for the unpaid labour and externalities that go into LLM development. Have you considered engaging from that perspective?


          adrianww@mastodon.scot
          #28

          @reading_recluse Absolutely. LLMs are the biggest, most bloody useless con ever invented by the vacuous arseholes in charge of the tech industry.

          The extra annoying thing is that there are other potential approaches to AI out there that are ultimately likely to be more useful, less destructive and work better (e.g. some expert systems, decision support systems, etc.). But so many folks are just playing with probabilistic horseshit generators instead.

          • lproven@social.vivaldi.net

            @xs4me2 @reading_recluse

            > can be useful to digest and explore information at great speed

            Nope. Still wrong. This is in fact something they are extremely and *dangerously* bad at.

            xs4me2@mastodon.social
            #29

            @lproven @reading_recluse

            Well, as I said, it is a tool; a hammer is neither right nor wrong. It can be used rightly or wrongly.

            As a domain expert, I use LLMs in my work, but I always judge and validate whether the output is right... I have indeed seen colleagues use them outside their zone of work, where I had to tell them: yes, what the LLM said is right, but not in this context. The real problem is that an LLM will never tell you the context, or the probability that what it is telling you is correct.


              xs4me2@mastodon.social
              #30

              @Firlefanz @dynamite_ready @lproven @reading_recluse

              Well, yes and no, see my reply below:

              https://mastodon.social/@xs4me2/116114648661782873


                luc0x61@mastodon.gamedev.place
                #31

                @reading_recluse I do agree, but I'd like to add something. After all, the manipulative scheme played on users isn't much different from what has happened over the last twenty-something years. The companies behind it are still the same ones; almost all of them were born less than three decades ago.

                LLMs have just refined the decoy, polished the deceptive honey-pot.


                  captaincoffee@freeradical.zone
                  #32

                  @reading_recluse that's a great point, you're right to point that out, and you've touched on a classic issue between humanity and LLMs


                    xs4me2@mastodon.social
                    #33

                    @dynamite_ready @lproven @reading_recluse

                    It is the user and their skills indeed. A hammer can be used skillfully or wrongly...


                      m@martinh.net
                      #34

                      @reading_recluse My first thought was that the people wittering on about "purity culture" literally can't grasp the concept of collective action. But then it struck me that framing everything as an individual choice is a classic neoliberal tactic to defuse and dismantle opposition when it becomes a threat. So I say: Good work, keep it up!


                        fluffgar@mastodon.scot
                        #35

                        @reading_recluse A small aid in assisting boycotting "A.I.": https://codeberg.org/just_a_husk/uBlockOrigin-AI-Blocklist


                          lproven@social.vivaldi.net
                          #36

                          @xs4me2 @dynamite_ready @reading_recluse But it can't be used for brain surgery.

                          No, this is not a skills issue; it is based on a profound misunderstanding. No, they are not good search tools. No, they are not good for research or learning, because they work only and entirely by *making stuff up*, and if you're learning, you're not an expert and can't tell true from false.


                            phil@fed.bajsicki.com
                            #37

                            @lproven@social.vivaldi.net @xs4me2@mastodon.social @reading_recluse@c.im
                            That hasn't been my experience. What have you tested it with?

                            Even tiny models in the 4-12B range have been able to handle the things I need (though granted, not as well as the 24-30B range).

                            My use case is saving my hands from typing up repetitive patterns, analyzing my journals from several angles (e.g. what my average mood is based on the wording I use in my journals, how that relates to some medical things like migraines, etc.), and as a parrot that repeats my plans/calendar back to me in different words, so I can overcome my own biases more easily.

                            I have found the available models entirely sufficient for these tasks.

                            Not for coding, though. Even Qwen3-Coder-Next, which is an 80B behemoth, just plain sucks at code.

                            Now, to be clear: I'm not saying they're always accurate when I use LLMs. I'm saying that, because I use them with data I type up by hand and am intimately familiar with, they save me time and mental effort, because spotting problems is easy.

                            I wouldn't use them for any subject which I'm not already well grounded in, and in that specific way, I agree with you.

                            But I also wouldn't say they're extremely or dangerously bad at digesting and exploring information as such. No more so than code written by juniors without supervision.

                            Ultimately it's on the user to ensure the tool's output meets requirements.

                            Anecdotally, people aren't great at processing large amounts of information either. I work in infosec, and curate a rather complex inventory/risk/audit/reporting toolkit. I pull data from over a dozen critical systems and sub-systems, networks, etc, including vast amounts of hand-written documentation, guides and explanations about how all of this works together.

                            I'm still the only person capable of actually using the entire toolset in concert - not even going into further development/ integrations. Others rely on Cursor/ Claude Code to use them. And that's fine by me - I'd rather have tools that get used than tools that are entirely dependent on me.

                            I guess my point is that in this scenario the problem isn't LLMs themselves. The problem is people who don't take time to read and understand the requirements, input and output.

                            (Of course, this is putting aside the ethical/ political/ economic/ ecological problems, to keep this conversation more focused on the technical merits/demerits.)


                              lproven@social.vivaldi.net
                              #38

                              @phil @xs4me2 @reading_recluse My current favourite paper on this:

                              https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/


                                xs4me2@mastodon.social
                                #39

                                @lproven @dynamite_ready @reading_recluse

                                In my opinion, you are incorrect here; a user is always responsible for judging the assumed truth as they observe it, especially with tools. There is no substitute for critical thinking, and there never will be.

                                Truth and social surroundings are infinitely more complex than analyzing a game of chess.


                                  xs4me2@mastodon.social
                                  #40

                                  @lproven @dynamite_ready @reading_recluse

                                  LLMs do not make things up per se; they use data, including wrong data, and there lies the danger, along with the fact that they cannot referee what is right and what is wrong.


                                    mattijamsa@c.im
                                    #41

                                    @reading_recluse I admit to having created songs with AI, pictures with AI, code with AI, clips of video with AI, all more out of curiosity than anything else. But generating text with AI: where is the fun in that...? AI-generated text gives me that immediate uncanny-valley effect, more so than video, music, or pictures. I've quit buying the Sunday edition of a certain newspaper because, reading some articles, I was sure there was AI involved. If I got that feeling reading a novel, what a disappointment that would be.


                                      xs4me2@mastodon.social
                                      wrote, last edited by
                                      #42

                                      @phil @lproven @reading_recluse

                                      Exactly, and as always, truth and reality are nuanced. I will keep using it, and I will always apply my critical thinking.

                                      • tseitr@mastodon.sdf.org

                                        @papageier @reading_recluse machine-woven cloth answered an essential need in a profitable, capitalist way. Can we say the same about LLMs?

                                        I think it is not inevitable, but time will tell.

                                        johnnydecimal@hachyderm.io
                                        wrote, last edited by
                                        #43

                                        @tseitr @papageier @reading_recluse My problem with this framing is: who gets to decide?

                                        Define 'essential'. Is a new generation of MacBooks 'essential'? Not really. The ones we have are amazing. But nobody's boycotting the progress being made in chip design.

                                        But the anti-LLM crowd seem to have decided: not having LLMs is 'enough'. Having them is superfluous. They're not 'needed'.

                                        I get the pushback. I'll never use one to write prose, because prose comes from my human heart.

                                        But to deny their utility in the world of code generation is to be dogmatic. The vast, vast majority of code generation isn't art: it's the rote stitching together of existing pieces to make a new thing.

                                        Claude is _much_ better at that than I am. Properly controlled by me, the result is better and more secure.

                                        So, I use Claude. Just like I use an IDE and a higher-level language, and just like I deploy to an edge network run by someone else rather than standing up my own. Because doing that is better than not doing that.

                                        • reading_recluse@c.im

                                          The LLM discourse on the Fediverse has really irked me the last few days.

                                          Refusing to read writing made with the use of LLMs and refusing to give time to writers who use, promote or justify the use of LLMs is not purity culture, it's a boycott. It's a political act of withdrawing my time, resources and support for something that I find deeply morally wrong. It's protest. I have a choice and I refuse.

                                          LLMs are exploitative, destructive, biased, mediocre parroting machines. Using them has a negative impact on the climate, the arts, the quality of the internet, the job market, the economy, the accessibility of electronics, even on skill development, creativity and mental health. LLMs are made and trained on the unpaid labour of millions, if not billions, of people who didn't consent. Their generic output litters the path to finding anything by true human creators.

                                          Wherever I can, for as long as I can, I reject LLMs and anything that is related to them. I'm boycotting.

                                          flashmobofone@mastodon.art
                                          wrote, last edited by
                                          #44

                                          @reading_recluse This take bugs me so much. Calling a boycott of LLMs 'purity culture' is the dumbest-ass take since Dems smeared Bernie as a sexist.

                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop