FARVEL BIG TECH
The LLM discourse on the Fediverse has really irked me the last few days.

125 Posts, 73 Posters, 0 Views
This thread has been deleted. Only users with topic-management privileges can see it.
  • reading_recluse@c.im wrote:

    The LLM discourse on the Fediverse has really irked me the last few days.

    Refusing to read writing made with the use of LLMs and refusing to give time to writers who use, promote or justify the use of LLMs is not purity culture, it's a boycott. It's a political act of withdrawing my time, resources and support for something that I find deeply morally wrong. It's protest. I have a choice and I refuse.

    LLMs are exploitative, destructive, biased, mediocre parroting machines. Using them has a negative impact on the climate, the arts, the quality of the internet, the job market, the economy, the accessibility of electronics, even on skill development, creativity and mental health. LLMs are made and trained on the unpaid labour of millions, if not billions, of people who didn't consent. Their generic output litters the path to finding anything by true human creators.

    Wherever I can, for as long as I can, I reject LLMs and anything that is related to them. I'm boycotting.

    captaincoffee@freeradical.zone replied (#32):

    @reading_recluse that's a great point, you're right to point that out, and you've touched on a classic issue between humanity and LLMs

  • dynamite_ready@mastodon.gamedev.place wrote:

      @lproven @xs4me2 @reading_recluse

      For generating content of any kind, I think there's a reckoning to come. Especially in the 'agentic' space.

      But for Information Retrieval, LLMs are great, tbh... I'd argue that also includes those far out stories about prompts leading to new scientific theories, or mathematical proofs.

      The tool is a big part of that, but it's the user ('operator'?) that writes the prompts, guides the outcomes, and validates them.

      That's a worthy advance.

      xs4me2@mastodon.social replied (#33):

      @dynamite_ready @lproven @reading_recluse

      It is the user and their skills indeed. A hammer can be used skillfully or wrongly...

      • reading_recluse@c.im wrote: (original post, quoted above)

        m@martinh.net replied (#34):

        @reading_recluse My first thought was that the people wittering on about "purity culture" literally can't grasp the concept of collective action. But then it struck me that framing everything as an individual choice is a classic neoliberal tactic to defuse and dismantle opposition when it becomes a threat. So I say: Good work, keep it up!

        • reading_recluse@c.im wrote: (original post, quoted above)

          fluffgar@mastodon.scot replied (#35):

          @reading_recluse A small aid for boycotting "A.I.": https://codeberg.org/just_a_husk/uBlockOrigin-AI-Blocklist

          • xs4me2@mastodon.social wrote: (post #33, quoted above)

            lproven@social.vivaldi.net replied (#36):

            @xs4me2 @dynamite_ready @reading_recluse But it can't be used for brain surgery.

            No, this is not a skills issue. It rests on a profound misunderstanding. No, they are not good search tools. No, they are not good for research or learning, because they work only and entirely by *making stuff up*, and if you're learning you're not an expert and can't tell true from false.

            • lproven@social.vivaldi.net wrote:

              @xs4me2 @reading_recluse

              > can be useful to digest and explore information at great speed

              Nope. Still wrong. This is in fact something they are extremely and *dangerously* bad at.

              phil@fed.bajsicki.com replied (#37):

              @lproven@social.vivaldi.net @xs4me2@mastodon.social @reading_recluse@c.im
              Hasn't been my experience. What have you tested it with?

              Even tiny models in the 4-12B range have been able to handle the things I need (though granted, not as well as the 24-30B range).

              My use-case is saving my hands from typing up repetitive patterns, analyzing my journals from several angles (e.g. what's my average mood based on the wording I use, and how does that relate to medical things like migraines), and as a parrot that repeats my plans and calendar back to me in different words, so I can overcome my own biases more easily.

              I have found the available models entirely sufficient for these tasks.

              Not for coding, though. Even Qwen3-Coder-Next, an 80B behemoth, just plain sucks at code.

              Now, to be clear: I'm not saying they're always accurate when I use LLMs. I'm saying that because I use them with data I type up by hand and am intrinsically familiar with, they save me time and mental effort, because spotting problems is easy.

              I wouldn't use them for any subject which I'm not already well grounded in, and in that specific way, I agree with you.

              But I also wouldn't say they're extremely or dangerously bad at digesting and exploring information as such. No more so than code written by juniors without supervision.

              Ultimately it's on the user to ensure the tool's output meets requirements.

              Anecdotally, people aren't great at processing large amounts of information either. I work in infosec, and curate a rather complex inventory/risk/audit/reporting toolkit. I pull data from over a dozen critical systems and sub-systems, networks, etc, including vast amounts of hand-written documentation, guides and explanations about how all of this works together.

              I'm still the only person capable of actually using the entire toolset in concert, not even going into further development and integrations. Others rely on Cursor or Claude Code to use them. And that's fine by me: I'd rather have tools that get used than tools that are entirely dependent on me.

              I guess my point is that in this scenario the problem isn't LLMs themselves. The problem is people who don't take time to read and understand the requirements, input and output.

              (Of course, this is putting aside the ethical/political/economic/ecological problems, to keep this conversation focused on the technical merits and demerits.)
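The workflow described above, querying a small local model over hand-typed entries the author already knows well, could be sketched roughly as follows. Everything here (the model name, prompt wording, and OpenAI-compatible payload shape) is an assumed illustration, not the poster's actual setup:

```python
# Hypothetical sketch: build a chat-completion request over journal entries.
# Model name and prompt text are invented for illustration.
def build_mood_query(entries: list[str], question: str) -> dict:
    """Assemble an OpenAI-compatible chat payload over journal entries."""
    corpus = "\n\n---\n\n".join(entries)
    return {
        "model": "qwen3-30b-instruct",  # assumed local model name
        "messages": [
            {"role": "system",
             "content": "Answer only from the journal entries provided."},
            {"role": "user",
             "content": f"Journal entries:\n{corpus}\n\nQuestion: {question}"},
        ],
        "temperature": 0.2,  # low temperature: analysis, not creativity
    }

payload = build_mood_query(
    ["Slept badly, mild migraine, short fuse all day.",
     "Long walk; felt clear-headed and calm."],
    "What is my average mood, and does it track the migraine mentions?")
print(payload["messages"][1]["content"])
```

The key point matches the post: the user supplies data they already know intimately, so errors in the model's answer are cheap to spot.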

              • phil@fed.bajsicki.com wrote: (post #37, quoted above)

                lproven@social.vivaldi.net replied (#38):

                @phil @xs4me2 @reading_recluse My current favourite paper on this:

                https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/

                • lproven@social.vivaldi.net wrote: (post #36, quoted above)

                  xs4me2@mastodon.social replied (#39):

                  @lproven @dynamite_ready @reading_recluse

                  In my opinion you are incorrect here: a user is always responsible for assessing assumed truth as they observe it, especially with tools. There is no substitute for critical thinking, and there never will be.

                  Truth and social surrounds are infinitely more complex than analyzing a game of chess.

                  • xs4me2@mastodon.social wrote: (post #39, quoted above)

                    xs4me2@mastodon.social replied (#40):

                    @lproven @dynamite_ready @reading_recluse

                    LLMs do not make stuff up per se; they use data, including wrong data, and there lies the danger, together with the fact that they cannot referee what is right and what is wrong.

                    • reading_recluse@c.im wrote: (original post, quoted above)

                      mattijamsa@c.im replied (#41):

                      @reading_recluse I admit to having created songs with AI, pictures with AI, code with AI, clips of video with AI, all more out of curiosity than anything else. But generating text with AI, where is the fun in that...? AI-generated text gives me that immediate uncanny-valley effect more than video, music, or pictures do. I've quit buying the Sunday edition of a certain newspaper because, reading some articles, I was sure AI was involved. If I got that feeling reading a novel, what a disappointment that would be.

                      • phil@fed.bajsicki.com wrote: (post #37, quoted above)

                        xs4me2@mastodon.social replied (#42):

                        @phil @lproven @reading_recluse

                        Exactly, and as always truth and reality are nuanced. I will be using it, and I will use my critical thinking (always).

                        • tseitr@mastodon.sdf.org wrote:

                          @papageier @reading_recluse Machine-woven cloth answered an essential need in a profitable capitalistic way. Can we say the same about LLMs?

                          I think it is not inevitable, but time will tell.

                          johnnydecimal@hachyderm.io replied (#43):

                          @tseitr @papageier @reading_recluse My problem with this framing is: who gets to decide?

                          Define 'essential'. Is a new generation of MacBooks 'essential'? Not really. The ones we have are amazing. But nobody's boycotting the progress being made in chip design.

                          But the anti-LLM crowd seem to have decided: not having LLMs is 'enough'. Having them is superfluous. They're not 'needed'.

                          I get the pushback. I'll never use one to write prose, because prose comes from my human heart.

                          But to deny their utility in the world of code generation is to be dogmatic. The vast, vast majority of code generation isn't art: it's the rote stitching together of existing pieces to make a new thing.

                          Claude is _much_ better at that than I am. Properly controlled by me, the result is better and more secure.

                          So, I use Claude. Just like I use an IDE and a higher-level language and just like I deploy to an edge network run by someone else vs. standing up my own. Because doing that is better than not doing that.

                          • reading_recluse@c.im wrote: (original post, quoted above)

                            flashmobofone@mastodon.art replied (#44):

                            @reading_recluse This take bugs me so much. Calling a boycott of LLMs 'purity culture' is the dumbest-ass take since Dems smeared Bernie as a sexist.

                            • lproven@social.vivaldi.net wrote: (post #38, quoted above)

                              xs4me2@mastodon.social replied (#45):

                              @lproven @phil @reading_recluse

                              There is no substitute for reading the final material of your subject of study yourself, line by line, and internalizing it. I remember the days of our paper scientific library, where I would stay a whole afternoon reviewing Phys Rev B, Applied Physics, Applied Optics and more on the topic of my research, and in the end had a stack of paper copies to take home and read. Going online hasn't changed that basic fact; it has just become much faster and more efficient.

                              • xs4me2@mastodon.social wrote: (post #39, quoted above)

                                ben@mastodon.bentasker.co.uk replied (#46):

                                @xs4me2 @lproven @dynamite_ready @reading_recluse

                                What you're essentially suggesting here is that LLMs are only good for consuming information if the user either already has the knowledge to judge the output (in which case, why are they asking?) or spends time verifying the claims the LLM makes (in which case, why bother asking the LLM?).

                                I've seen them make some pretty important mistakes, including suggesting that a Director who wasn't on the call being summarised had authorised something.
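The Director example suggests a simple mechanical check: summary claims that attribute actions to people should be verifiable against the attendee list. A toy sketch, assuming minutes follow a "Name: action" line convention (the names and lines are invented, not from a real meeting):

```python
# Toy verifier: flag summary lines that attribute actions to people
# who were not on the call. All data below is invented for illustration.
def flag_absent_actors(summary: list[str], attendees: set[str]) -> list[str]:
    """Return summary lines whose 'Name:' prefix is not an attendee."""
    flagged = []
    for line in summary:
        actor = line.split(":", 1)[0]  # text before the first colon
        if actor not in attendees:
            flagged.append(line)
    return flagged

minutes = ["Alice: approved the budget",
           "Director Bob: authorised the rollout",  # Bob wasn't on the call
           "Carol: raised the audit finding"]
print(flag_absent_actors(minutes, {"Alice", "Carol"}))
# → ['Director Bob: authorised the rollout']
```

Even a check this crude catches the class of error described above; the catch, as the post notes, is that someone still has to maintain the ground truth to check against.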

                                • ben@mastodon.bentasker.co.uk wrote: (post #46, quoted above)

                                  xs4me2@mastodon.social replied (#47):

                                  @ben @lproven @dynamite_ready @reading_recluse

                                  I am suggesting that a competent user can indeed use tools in the right way, and only through in-depth knowledge of them. You can call that craftsmanship, experience, or simply domain knowledge.

                                  That does not imply that tools or LLMs are useless, nor that they are without danger. A sharp chisel can cut off your finger. A poorly configured LLM can feed you a load of nonsense...

                                  • lproven@social.vivaldi.net wrote: (post #38, quoted above)

                                    phil@fed.bajsicki.com replied (#48):

                                    @lproven@social.vivaldi.net @xs4me2@mastodon.social @reading_recluse@c.im
                                    1. That paper is from nearly 2 years ago, and a lot has changed. Not to mention the 'test' the author (can't find their name, sorry) did is pretty dumb. It's much better to use an API, where you control the full input pipeline and can ensure the vendor isn't adding hidden instructions without your knowledge.
                                    2. I already addressed the point in my previous comment: it's on the user to verify that tools have correct output. Relying on an LLM to do the reading in one's stead is a recipe for disaster.

                                    You haven't said anything about YOUR use-case, experience, or the tests you tried.

                                    I'm genuinely curious, what do you imagine using an LLM is like?

                                    The reason I ask is because a lot of the criticism and panicking (sometimes crossing into outright disrespect and bigotry) I see online comes from an assumption that using an LLM is predicated on turning off one's brain and taking the output at face value... something that we shouldn't be doing with any software anyway.

                                    I guess put another way: I don't believe that the problems people attribute to LLMs are specific to LLMs. How many instances were there where management/ execs took Excel output as fact, when the formulas were set up wrong?

                                    These statistical models are no different.

                                    • reading_recluse@c.im

                                      The LLM discourse on the Fediverse has really irked me the last few days.

                                      Refusing to read writing made with the use of LLMs and refusing to give time to writers who use, promote or justify the use of LLMs is not purity culture, it's a boycott. It's a political act of withdrawing my time, resources and support for something that I find deeply morally wrong. It's protest. I have a choice and I refuse.

                                      LLMs are exploitative, destructive, biased, mediocre parroting machines. Using them has a negative impact on the climate, the arts, the quality of the internet, the job market, the economy, the accessibility of electronics, even on skill development, creativity and mental health. LLMs are made and trained on the unpaid labour of millions -if not billions- of people who didn't consent. Their generic output litter the path to finding anything by true human creators.

                                      Wherever I can, for as long as I can, I reject LLMs and anything that is related to them. I'm boycotting.

                                      art_histories@mastodon.social
                                      wrote, last edited by
                                      #49

                                      @reading_recluse Completely d'accord. Also, LLM-produced "art" is so dull. I don't want to read it. For some reason my brain starts to shut down when reading an LLM-produced text. I forget the picture as soon as I close it. Same with music. AI-generated voices are so grating. The artificiality of it all makes me mad. It doesn't challenge me, it doesn't tell me anything, there is nothing intentional behind it. It's just - nothing. And it destroys the environment.

                                      • reading_recluse@c.im

                                        ox1de@cyberplace.social
                                        wrote, last edited by
                                        #50

                                        @reading_recluse u do u

                                        • reading_recluse@c.im

                                          fergabell@zeroes.ca
                                          wrote, last edited by
                                          #51

                                          @reading_recluse What disgusts me is the total disconnect from the natural world and the devastating effects of human activity in most forms on nature. We are hurtling toward ecocide and massive planetary collapse of current life forms. And what do they do? Grasp and exploit and posture and perform and strut in their massive ignorance of how a closed, interdependent, symbiotic living system actually works. The human supremacy religion means the death of all of us and a magical world full of beauty and wonder gone before its time.

                                          Powered by NodeBB · Graciously hosted by data.coop