FARVEL BIG TECH
There seem to be two distinct kinds of “chatbot psychosis” happening right now:

llmslop
27 Posts · 18 Posters · 16 Views
This thread has been deleted. Only users with topic-management privileges can see it.
  • michaelgemar@mstdn.ca

    @eschaton Does #2 include CEOs, or is firing huge swathes of your staff and replacing them with AI a different type of psychosis?

    simonzerafa@infosec.exchange
    wrote, last edited by
    #18

    @michaelgemar @eschaton

    That's almost a combination of Type 1 and Type 2, in that both together can lead to unrealistic and delusional levels of belief in how effective LLM output can be 🙂

    Type 12 (combined psychosis) or Type 3? 🙂🤷‍♂️

    • eschaton@mastodon.social

      Type 2 can be summed up as “How dare you presume to tell me whether I’m allowed to use an LLM if I want to?!” Just an absolutely incredible degree of entitlement.

      #ai #llm #slop

      janl@narrativ.es
      wrote, last edited by
      #19

      @eschaton amen. Relatedly: https://narrativ.es/@janl/114566975034056419

      • eschaton@mastodon.social

        There seem to be two distinct kinds of “chatbot psychosis” happening right now:

        1. Becoming delusional about themselves and the world as a result of being glazed nonstop by the friend in their computer, thinking they’re inventing new physics, discovering mystical secrets, etc. and becoming manic.

        2. Becoming delusional about what LLMs are capable of and how effective they are, as a result of developing a reliance upon them, and becoming fanatical in their promotion and defense.

        #ai #llm #slop

        ruenahcmohr@infosec.exchange
        wrote, last edited by
        #20

        @eschaton which one does "I have nobody to talk to but the AI" fit into?

        • paul@tapbots.social

          @eschaton I’m curious if you think it’s all plagiarism, or if some uses of LLMs are not? I asked it today to go look through some classes and add a define everywhere I was hardcoding a specific constant. I find it hard to accept that as plagiarism under any definition of it that makes sense to me. Whereas doing “write a web browser” I’d imagine is going to just spew out a ton of other people’s code.

          __d@mastodon.social
          wrote, last edited by
          #21

          @paul @eschaton I like to imagine that instead of the LLM behind the prompt, there’s a person. Instead of paying Anthropic/whoever, I’m paying a human. All the generated code is written by the hidden person. All those constant values replaced by defines were written by the person behind the interface.

          Now, do I consider the result to be 100% my own work? I find that I cannot.

          • eschaton@mastodon.social

            As an example, see the incredible escalation in response to me saying that the output of an LLM does not represent a developer’s own work: https://news.ycombinator.com/item?id=47344155

            The slopmonger refuses to accept that what they’re doing meets the academic definition of plagiarism. Instead they insist that I must not understand LLMs and that I need to get out of the way and out of the industry because what they’re doing is the way of the future.

            #ai #llm #slop

            europlus@social.europlus.zone
            wrote, last edited by
            #22

            @eschaton “you’re a stupid poo-poo head…” Poo-poo Head 😝

            • __d@mastodon.social


              ahltorp@mastodon.nu
              wrote, last edited by
              #23

              @__d @paul @eschaton I also often use this "LLM as a person" way of looking at it, especially in academic settings when I try to explain plagiarism. As long as it is only used as one tool for explanation, and not the only one, I find that it works quite well.

              Some people don't even seem to understand that having someone else write it for you is plagiarism, though.

              • eschaton@mastodon.social

                @michaelgemar It absolutely includes CEOs, CTOs, pundits, and the like. However it also includes the people who get extremely angry when an Open Source project says “no, we will not take your contribution to our project if you used an LLM to create it, because it’s not your work.” They can go to Dennis Reynolds levels of unbound rage almost instantly and it’s really something to see.

                abucci@buc.ci
                wrote, last edited by
                #24
                @eschaton@mastodon.social @michaelgemar@mstdn.ca I think the anger response is at least partly explainable by this: https://buc.ci/abucci/p/1773412163.748396

                The CEO response may be totally explained by that...
                • michaelgemar@mstdn.ca


                  abucci@buc.ci
                  wrote, last edited by
                  #25
                  @michaelgemar@mstdn.ca For what it's worth, the majority of layoffs have been done for conventional economic reasons, or because companies (esp. tech companies) overhired near the beginning of the COVID pandemic. They are using AI as an excuse, hoping AI psychosis will distract from the otherwise-obvious conclusion that they made poor management decisions. @eschaton@mastodon.social
                  • eschaton@mastodon.social


                    murodegrizeco@toad.social
                    wrote, last edited by
                    #26

                    @eschaton

                    Suddenly, I think of Lindsey Graham raging during the US Senate hearing for the drunk rapist Supreme Court nominee, Kavanaugh!

                    Performative rage. Performative outrage.

                    Seems like a generalized tell for detecting narrative warfare actors. Always dialling it to 11...

                    • eschaton@mastodon.social


                      nielsa@mas.to
                      wrote, last edited by
                      #27

                      @eschaton Yeah—but I don't really think the analogy of "psychosis" works for the latter. Delusion, sure.

                      • jwcph@helvede.net shared this topic
                      Powered by NodeBB Contributors
                      Graciously hosted by data.coop