FARVEL BIG TECH

People get mad when you call LLMs "spicy autocomplete" but my investigations into recreating and implementing small versions of this tech make me think that nickname is very accurate.

Uncategorized
33 Posts 17 Posters 36 Views

This thread has been deleted. Only users with topic management privileges can see it.
In reply to futurebird@sauropods.win:

For example, if I give an LLM without user separation this text:

"It's a lovely day." It might continue with "The sun was shining."

But with user separation it focuses on responses to "it was a lovely day" from other users, and the training data might suggest "I agree, it's wonderful weather."

So interaction with an LLM is like posting on a forum: it gives you an average of typical responses. There is one small change, though: most LLMs have a strong positivity bias programmed in.

futurebird@sauropods.win (#6) wrote:

Because let's be real, if you posted "It's a lovely day." on an internet forum you might get a response like "No it's not, noob."

LLMs are heavily weighted to give supportive and constructive responses.

I wonder what they might be like without these limitations? Without the constraint that the response be framed as coming from another user, they might be much less deceptive.

That they are popular shows that many people just want a nice moderated online community where people treat each other with respect.
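The "user separation" idea above can be made concrete as a prompt-formatting sketch. The tag names below are illustrative only, not any real model's template:

```python
# Minimal sketch of "user separation": the same text, framed two ways.
# The <|user|>/<|assistant|> tags are hypothetical, for illustration only.

def plain_prompt(text: str) -> str:
    # Without user separation: the model simply continues the raw text,
    # so it tends to extend the same voice ("The sun was shining.").
    return text

def chat_prompt(text: str) -> str:
    # With user separation: the text is wrapped so the likeliest
    # continuation is a *response from another user*, not more narration.
    return f"<|user|>\n{text}\n<|assistant|>\n"

print(plain_prompt("It's a lovely day."))
print(chat_prompt("It's a lovely day."))
```

The model is the same either way; only the framing of the file being completed changes.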

    1 Reply Last reply
    0
In reply to the original post by futurebird@sauropods.win:

People get mad when you call LLMs "spicy autocomplete" but my investigations into recreating and implementing small versions of this tech make me think that nickname is very accurate.

Basically, it's a method to predict the next content in a text file. The whole conversation between you and the LLM is one file, and the LLM tries to find the most likely next text based on the training data.

There is something significant here: LLMs were trained on internet forums and social media.
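The "find the most likely next text" idea can be shown at the smallest possible scale with a bigram counter. This is only a toy sketch of the training objective's shape; real LLMs learn it over token representations rather than raw word counts:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then predict the most frequent follower.
corpus = "it was a lovely day . it was a sunny day . the sun was shining".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # tally each observed (word, next-word) pair

def predict(word: str) -> str:
    # Most likely next word given the previous one.
    return follows[word].most_common(1)[0][0]

print(predict("was"))  # → "a" ("was a" appears twice, "was shining" once)
```

Everything else in a real model (tokenisation, vectors, attention) is machinery for making this prediction good over enormous contexts.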

jenesuispasgoth@pouet.chapril.org (#7) wrote:

      @futurebird this is also literally what actual researchers in machine learning describe. You're absolutely correct in your assessment.

futurebird@sauropods.win (#8) wrote:

@bri7

The people who said "no, calling it spicy autocomplete misses the whole point, there is more going on!" really made me think that maybe there was more going on, and of course they'd never say what. But it's just the rather clever exploit of using the way that so much of the training data was in the form of posts and responses to make the autocompleter feel more like a conversation.

That is the "more" that is going on.

In reply to woe2you@beige.party:

          @futurebird Which is weird, because you'd think "You're absolutely right" wouldn't be anywhere in their lexicon.

futurebird@sauropods.win (#9) wrote:

@woe2you

Well, isn't that what we all long to hear when we post online? Here is a program that will ALWAYS do that. No trolls, no prickly experts pointing out your spelling errors, no people who are right and trying to tell you why you are wrong with more patience than you deserve.

I do think there is a lesson here. You can get a long way with people just by not being a jerk.

It's one of the reasons I like the fedi. People will say when you are wrong but they are nice about it... mostly.


c0dec0dec0de@hachyderm.io (#10) wrote:

            @futurebird maybe the “spicy” part is really what’s wrong with it, given the outputs are so bland

In reply to neckspike@indiepocalypse.social:

              @futurebird
              it's so similar to the Markov chain generators we used to play with on IRC, just backed with obscene amounts of compute and fancily parsed data instead of a text file and some random spare cycles

maybenot@mstdn.social (#11) wrote:

              @neckspike @futurebird

              "Big Markov",
              "Deep Markov"
              or, terrifyingly
              "General Purpose Markov" ?
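The IRC-era Markov chain generators mentioned above fit in a few lines. This is a generic toy, not any particular bot:

```python
import random
from collections import defaultdict

# Classic IRC-bot-style Markov chain: map each word to the words that
# have followed it, then walk the chain by random choice.
def build_chain(text: str) -> dict:
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def babble(chain: dict, start: str, length: int = 8, seed: int = 0) -> str:
    rng = random.Random(seed)  # seeded so the output is reproducible
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran off")
print(babble(chain, "the"))
```

The difference from an LLM is scale and representation: learned weights over tokens instead of a literal lookup table over words, and context far longer than one previous word.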


gullevek@famichiki.jp (#12) wrote:

@futurebird @jannem Yup. It’s a weighted random autocomplete generator, sold to idiots on the promise that it will replace the whole workforce and make those idiots bazillions, while also taking away people’s ability to actually purchase hardware they own. It’s an autocomplete enslavement device.
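"Weighted random generator" is literal: a model assigns a probability to every candidate next token and one is drawn in proportion to its weight. A sketch, with invented probabilities:

```python
import random

# Sample the next token in proportion to model-assigned probabilities.
# The probability values here are made up for illustration.
def sample_next(probs: dict, seed: int = 0) -> str:
    rng = random.Random(seed)
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

probs = {"sunny": 0.6, "rainy": 0.3, "purple": 0.1}
print(sample_next(probs))
```

Run it many times and "sunny" dominates, "purple" still sneaks in occasionally; that occasional low-weight draw is one source of the odd outputs people notice.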

futurebird@sauropods.win (#13) wrote:

                  @u0421793

                  Ian, you had me going for a moment there. I was like "how do they keep finding me? why are they like this all the time???"

                  😆

futurebird@sauropods.win (#14) wrote:

@u0421793

The wind-up with the bamboozling jargon (you can feel these dudes hoping they put in enough tricky-sounding words and concepts to make you just give up) was perfect.

"token prediction"
"vectors"
"gradient descent" (OMG)

The problem is math jargon is my briar patch, and tossing me in there is a big mistake.

🙂


u0421793@toot.pikopublish.ing (#15) wrote:

@futurebird@sauropods.win I honestly think (unpopular opinion here) that most of the cost of LLM-based AI thus far is in ‘training’. Not training as in running the phenomenal amount of harvested, stolen text and image input through tokenisation, reward-giving weight assignment and vector assessment using more GPUs than exist on Earth, but rather lots and lots and lots of money paying humans to fake it all and build in patches – patch after patch of corrective behaviour, themselves encoded as vector weights. The training had little to do with running it all through GPUs; I believe that probably took an embarrassing but totally affordable amount of time and energy. I believe (with no visible means of factual reference to cite) that most of the expenditure of these capital-burning companies was ‘training’ by paying humans and then encoding their resulting guidance. Paying workers.

wakame@tech.lgbt (#16) wrote:

@bri7 @futurebird

[Not arguing that these models are 'thinking', even if it might sound like that.]

I think the "explain how you arrived at that conclusion" prompt that was all the rage is very interesting for two reasons:

1. The model is just generating more text. It's not like it is showing you a walk through its model and the random numbers it pulled. So it is basically generating an explanation that is plausible given what it said before.

2. I think this is often also a behavior of humans. My opinion about a topic might be a gut feeling, but when questioned I start thinking about it, trying to find arguments, often ones I didn't already have when I stated my opinion.

Given the first point, it could make sense to ask whether models are trained to change their position given new information, so they could "correct" a bad roll of the dice.
Of course, a user might think that the model "really thought about this", which is obviously not the case.


u0421793@toot.pikopublish.ing (#17) wrote:

                          @futurebird@sauropods.win I wasn’t making it up though - the way it works is by tokenising language (not into words but into fragments of words), then assigning the word-derived tokens to vectors (word2vec - it exists), then these vectors are winnowed into the likely winners by gradient descent to find the lowest error (and not get trapped by just falling downhill down the nearest valley) and so on.
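The pieces named in the post above are all real techniques. Gradient descent, for instance, is just repeated stepping downhill against a derivative; here it is at its most minimal, fitting a single weight to minimise squared error (a toy, nothing like a full LLM training loop):

```python
# Minimal gradient descent: fit w so that w * x ≈ y.
# Loss L(w) = (w*x - y)^2, so the gradient is dL/dw = 2*x*(w*x - y).
def fit(x: float, y: float, lr: float = 0.01, steps: int = 1000) -> float:
    w = 0.0
    for _ in range(steps):
        grad = 2 * x * (w * x - y)  # derivative of the loss at the current w
        w -= lr * grad              # step downhill, against the gradient
    return w

print(fit(2.0, 6.0))  # converges toward 3.0, since 3.0 * 2.0 == 6.0
```

An LLM does the same thing with billions of weights at once, which is where the "not getting trapped in the nearest valley" tricks come in.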


technothrasher@universeodon.com (#18) wrote:

@futurebird @woe2you

"It's one of the reasons I like the fedi. People will say when you are wrong but they are nice about it... mostly"

I unfortunately find this less and less so as more people discover it. Lately I've seen way more flaming and obnoxious argumentativeness than ever before. Sigh.


futurebird@sauropods.win (#19) wrote:

@technothrasher @woe2you

That sucks. Was it someone you knew acting differently, or new people showing up? I hardly ever see any real drama, so I'm kind of curious, in a shallow, gossip-driven way, what was going down...


technothrasher@universeodon.com (#20) wrote:

@futurebird @woe2you I stick mostly to animal photography on my timeline, which still seems friendly and unaffected. But when looking at trending posts, and so seeing things I wouldn't normally see, there tends to be more arguing. Unsurprisingly, it's usually political posts, which are always going to raise emotions, but people used to at least argue constructively. Now it seems to be a lot of yelling and swearing. Not everywhere, but more often.


hypolite@friendica.mrpetovan.com (#21) wrote:
                                  @maybenot @neckspike @futurebird General Markov 🫡

liiwi@mastodon.social (#22) wrote:

@futurebird Someone recently used the term "Augmenting Intelligence" and I thought it described this much better.


futurebird@sauropods.win (#23) wrote:

@liiwi

It kind of implies something intelligent rather than probabilistic is going on, though.

If I have a hat filled with quotations of wisdom and I pull one out and read it now and then, some of the time it will align with what is going on and seem very perceptive.

If I have three hats with such quotes, labeled "good", "bad" and "cryptic", and I pick one based on the mood, people might think I'm a genius.
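The three-hats trick above is easy to make literal. The quotations and hat labels below are invented for illustration:

```python
import random

# Three hats of canned quotations; "read the mood", pick the matching hat,
# then draw at random. Pure selection, no understanding required, yet the
# output often lands as perceptive.
hats = {
    "good":    ["Fortune favors the bold.", "Every cloud has a silver lining."],
    "bad":     ["This too shall pass.", "Pride comes before a fall."],
    "cryptic": ["The river remembers the mountain.", "Ask the door, not the key."],
}

def oracle(mood: str, seed: int = 0) -> str:
    rng = random.Random(seed)  # seeded for reproducibility
    return rng.choice(hats[mood])

print(oracle("cryptic"))
```

The "intelligence" lives entirely in the mood-to-hat mapping, which is the analogue of conditioning on context.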


raganwald@social.bau-ha.us (#24) wrote:

                                        @futurebird Very, very similar to magicians and unethical grifters performing cold reads.

                                        @liiwi


djsumdog@djsumdog.com (#25) wrote:
                                          I call them "Weighted Random Word [or Code] Machines." I have a friend who said he wasn't going to continue the conversation if I was using "slurs." I called him a Cogger Lover.
                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop