FARVEL BIG TECH
Don't anthropomorphize LLMs, language is important.

26 Posts · 22 Posters · 41 Views

This thread has been deleted. Only users with topic management privileges can see it.
kinou@lgbtqia.space wrote:

@Andres4NY @gabrielesvelto I might have missed a chapter, but my interpretation is that someone prompted their LLM to generate this text and then posted it, no? The way I saw this narrated, the LLM reacted to the prompt "PR closed" by creating a blog post. But to do that you need a human operator, no?

gbargoud@masto.nyc (#10) wrote:

@kinou @Andres4NY @gabrielesvelto Not necessarily; it just needs access to a blog-posting API and some training data that got it to autocomplete "I got my PR rejected because it was garbage" with "and then wrote a blog post about it".

A lot of people have provided that training data.

gabrielesvelto@mas.to wrote:

Don't anthropomorphize LLMs, language is important. Say "the bot generated some text" not "the AI replied". Use "this document contains machine-generated text" not "this work is AI-assisted". See how people squirm when you call out their slop this way.

giacomo@snac.tesio.it (#11) wrote:

@gabrielesvelto@mas.to

Even talking about "text", in the context of #LLM, is a subtle anthropomorphization.

Text is a sequence of symbols used by human minds to express information that they want to synchronize, at least a little, with other human minds (aka communicate).

Such synchronization is always partial and imperfect, since each mind integrates the new message with different experiences and information, but it's good enough to allow humanity to collaborate and to build culture and science.

A statistically programmed piece of software has no mind, so even when it's optimized to produce output that can fool a human and pass the #Turing test, that output holds no meaning, since no human experience or thought is expressed in it.

It's just partial decompression of a lossy compression of a huge amount of text. And as if that weren't enough to show the lack of any meaning, the decompression process includes random input that is there to provide the illusion of autonomy.

      So instead of "the AI replied" I'd suggest "the bot computed this output" and instead of "this work is AI-assisted" I'd suggest "this is statistically computed output".
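The "random input" in the decompression process above is literally a random number draw during decoding. A minimal sketch of temperature sampling makes this concrete (illustrative only; the logits and temperature values below are made up, not taken from any real model):

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=None):
    """Pick a token index from raw model scores: softmax, then a random draw."""
    rng = rng or random.Random()
    # Scale scores by temperature, then convert to probabilities (softmax).
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # This random draw is the only source of "variety" in the output.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

As the temperature approaches zero the draw collapses to a deterministic argmax; raising it spreads probability mass across more tokens, producing the apparent "creativity" that the randomness supplies.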
gabrielesvelto@mas.to wrote:

@kinou @Andres4NY Not necessarily, or at least not as a follow-up. The operator might have primed the bot to follow this course of action in the original prompt, and included all the necessary permissions to let it publish the generated post automatically.

andres4ny@social.ridetrans.it (#12) wrote:

@gabrielesvelto @kinou Yeah, it's unclear how much of this is human-directed and how much is automated. If a bot is trained on aggressive attempts to get patches merged, that's the behavior it will emulate. Or an actual human could be directing it to act like an asshole in an attempt to get patches merged.


irfelixr@discuss.systems (#13) wrote:

@gabrielesvelto Yes 💯


nini@oldbytes.space (#14) wrote:

@gabrielesvelto "This is digital noise your brain perceives as words, like a pareidolic blob or a shadow cast on a wall. Do not interpret it as anything other than dirt smears on the window of reality that remind you of information."


cb@social.lol (#15) wrote:

@gabrielesvelto The other day my wife showed me a video of ChatGPT communicating with a male voice. At first I referred to "him", and immediately corrected that to "it".

jrdepriest@infosec.exchange (#16) wrote:

@mark @gabrielesvelto At least potted plants are living things.

And nobody tries to say a washing machine will magically birth AGI (as far as I know).

It's not the "talking to things" part that's madness. It's the belief that a machine that can match tokens and spit out some text that resembles a valid reply is a sign of true intelligence.

When I punch 5 × 5 into a calculator and hit =, I shouldn't ascribe the glowing 25 to any machine intelligence. It should be the same for LLM-powered generative AI, but that "natural language" throws us off. Our brains aren't used to dealing with (often) coherent language generated by an unthinking statistical engine doing math on giant matrices.


orangefloss@mastodon.social (#17) wrote:

@gabrielesvelto Couldn't agree more with this ethic. The psychological impact of users (i.e. society) believing that LLMs are people, fulfilling roles that actual humans should, will probably unfold over years and decades. All because regulators circa 2024–26 believed it was overreach to demand that LLMs not use anthropomorphic language and narrative style. Prompt: "What do you think?" Reply: "There is no 'I'. This is a machine-generated response, not a conscious self." Sounds better to me.


rupert@mastodon.nz (#18) wrote:

@gabrielesvelto I'm trying to get people to use the neologism "apokrisoid" for an answer-shaped object. The LLM does not and cannot produce actual answers.
#apokrisoid


opalideas@mindly.social (#19) wrote:

@gabrielesvelto Exactly. But the media (and hence the public) like to use short forms, whether accurate or not. I give a presentation to folks about AI (The Good, The Bad and The Ugly), after which everybody keeps referring to "AI", not "machine-generated text"!


mjdxp@labyrinth.zone (#20) wrote:

@gabrielesvelto I try to limit my LLM use because it's fundamentally evil, but whenever I do use one I never treat it like a person. I believe that's how people become addicted to chatbots. It is not an intelligent being with experiences and feelings; it's a cold machine that just uses an algorithm to arrange words in a way that's tweaked to sound like human writing. Our brains struggle to understand that, which is how you end up with people abandoning their real friends for AI bots and even considering them to be romantic partners.

Also, I've heard people say things like "I always say thank you whenever I ask the AI for help with something, so they'll hopefully spare me when the robot uprising comes", and I honestly can't tell if they're joking or not. If not, maybe we should be fighting against the people who are funding these robots you're so scared of? By the way, I highly doubt any sort of robot uprising will happen anytime soon; ChatGPT has an existential crisis if you do something as simple as ask for the non-existent seahorse emoji. It's not smart.

apophis@yourwalls.today (#21) wrote:

@gabrielesvelto "the slop machine churned out"

Resisting the temptation to even say "shat".

kitkat_blue@mastodon.social (#22) wrote:

@gabrielesvelto Oooorrrr... "the clanker clanked out some text"! 😀

"This document contains clanker-sourced text droppings"! 😋


lucydev@wetdry.world (#23) wrote:

@gabrielesvelto This is true. It's really strange seeing non-tech people around me talk about LLMs as if they were sentient beings; it's kind of unsettling.

Still, we need to make sure there's no lack of responsibility on the part of the operators or users of these programs. With the whole story about the blog post generated autonomously by an LLM in response to a FOSS maintainer's AI policy, some people kind of forgot that a person is responsible for setting it up that way and for letting the program loose.


lauerhahn@sfba.social (#24) wrote:

@Andres4NY @gabrielesvelto @kinou Well, but an LLM does not have "behavior" as a property. It is just programmed to match particular patterns of words. I think that's related to the distinction the OP is making.

gabrielesvelto@mas.to wrote:

@bit Absolutely, and it gives people the impression that they have failure modes, which they don't. Their output is text which they cannot verify, so whether the text is factually right or wrong is irrelevant. Both are valid and completely expected outputs.

lauerhahn@sfba.social (#25) wrote:

@gabrielesvelto @bit This! This really needs to be widely understood.


antimundo@mastodon.gamedev.place (#26) wrote:

@gabrielesvelto The worst cases of this are when people say "ChatGPT said..." as if an AI could talk, or "ChatGPT thinks..." as if an AI could think.

jwcph@helvede.net shared this topic.

Powered by NodeBB · Hosted by data.coop