FARVEL BIG TECH
Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that


Uncategorized · 317 posts · 120 posters
This thread has been deleted. Only users with topic management privileges can see it.
  In reply to mjg59@nondeterministic.computer, who wrote:

    Clearly my most unpopular thread ever, so let me add a clarification: submitting LLM generated code you don't understand to an upstream project is absolute bullshit and you should never do that. Having an LLM turn an existing codebase into something that meets your local needs? Do it. The code may be awful, it may break stuff you don't care about, and that's what all my early patches to free software looked like. It's ok to solve your problem locally.

    alextecplayz@techhub.social wrote (#125):

    @mjg59 pretty much. I deeply dislike any PRs I see on various projects where the prompt was basically just something like "I want you to implement this major feature into this project", with no real understanding of the underlying code and whatnot.

    I would rather have coders that know what they're doing and that understand their codebases use LLMs than a random Joe Schmoe like those TikTok vibecoders with like 5 monitor screens, brainrotted on short-form content asking Claude to add E2EE to some project or to refactor the rendering process of a game engine or whatnot.

    These people are wasting the maintainers' time with a jumbled mess of AI code that assumes a few things and that likely breaks on the first try.

    ---

    There's nothing wrong with pulling a git repo and then vibe-coding a quick thing as a test or for your specific use case, but there's everything wrong with upstreaming that as a PR if you have no idea how the project's code even works or how it's architected, and with no tests or checks.

    In reply to mjg59@nondeterministic.computer, who wrote:

      Personally I'm not going to literally copy code from a codebase under an incompatible license because that is what the law says, but have I read proprietary code and learned the underlying creative aspect and then written new code that embodies it? Yes! Anyone claiming otherwise is lying!

      distrowatch@mastodon.social wrote (#126):

      @mjg59 This might be the dumbest thing you have written. You basically just said anyone who claims not to have committed copyright infringement is lying, which is both obviously false and insulting to developers.

      In reply to mjg59@nondeterministic.computer, who wrote:

        When I write code I am turning a creative idea into a mechanical embodiment of that idea. I am not creating beauty. Every line of code I write is a copy of another line of code I've read somewhere before, lightly modified to meet my needs. My code is not intended to evoke emotion. It does not change how people think about the world. The idea→code pipeline in my head is not obviously distinguishable from the prompt→code process in an LLM.

        distrowatch@mastodon.social wrote (#127):

        @mjg59 "Every line of code I write is a copy of another line of code I've read somewhere before." This cannot possibly be true. Surely, as a developer, you've written some original content: something unique, a function of your own, or something you hadn't simply read before?

        Even if it is somehow true for you, it is not at all how most developers write code.

        In reply to mjg59@nondeterministic.computer, who wrote:

          Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
          LLMs: (enable that)
          Free software people: Oh no not like that

          karolherbst@chaos.social wrote (#128):

          @mjg59 Most of the discourse just shows why "the Linux community" is considered an elitist, toxic cesspit by most non-Linux people.

          And it's wild, because many who consider themselves the good folks in this regard are also participating in this toxicity.

          It's as if being condescending and shaming others for their poor choices is seen as the normal thing to do.

          In reply to mjg59@nondeterministic.computer, who wrote:

            When I write code I am turning a creative idea into a mechanical embodiment of that idea. I am not creating beauty. Every line of code I write is a copy of another line of code I've read somewhere before, lightly modified to meet my needs. My code is not intended to evoke emotion. It does not change how people think about the world. The idea→code pipeline in my head is not obviously distinguishable from the prompt→code process in an LLM.

            flacs@mastodon.social wrote (#129):

            @mjg59 this may be true for code I don't care about or need to deliver quickly; everything else definitely contains as much beauty as I am capable of.

            In reply to newhinton@troet.cafe, who wrote:

              @mnl @david_chisnall @mjg59 @ignaloidas

              even reading the first page.

              Generally, this assessment of the overall book extends to each page, even if the book contains pages with errors.

              For LLMs, there is a probability that each query results in garbage. In the book analogy, it is as if each page were written by a different author, some experts, some crooks.

              Except no page is attributed, and guessing who wrote which page is up to the reader.

              There is no model to be built around that failure mode.
              2/2

              mnl@hachyderm.io wrote (#130):

              @newhinton @david_chisnall @mjg59 @ignaloidas I'm not really following. Using an LLM doesn't erase my brain the minute I use it, nor is it a random number generator where you are forbidden to check the answers. These all hold for LLMs.

              In reply to mjg59@nondeterministic.computer, who wrote:

                Personally I'm not going to literally copy code from a codebase under an incompatible license because that is what the law says, but have I read proprietary code and learned the underlying creative aspect and then written new code that embodies it? Yes! Anyone claiming otherwise is lying!

                phooky@hexa.club wrote (#131):

                @mjg59 "i don't like programming and anyone who does is a liar" is a hill to die on, i guess

                In reply to bananarama@mstdn.social, who wrote:

                  @david_chisnall @mjg59 I suspect CHERI would make running LLM-generated code more feasible, and probably less risky. I'm not saying this to be an annoying contrarian, but rather that stronger underlying models seem to make playing with garbage LLM code more viable. Terry Tao has been using them to generate quick and dirty proofs, cha bu duo.

                  david_chisnall@infosec.exchange wrote (#132):

                  @bananarama @mjg59

                  It certainly can. As long as you are careful about the interfaces to the compartment, you can reason about the worst that can happen with the LLM-generated code. I see this as a special case of supply-chain attacks, which the CHERIoT compartmentalisation model was designed to protect against: assume this code works for your test vectors and might be actively malicious in other cases; what's the worst that can happen? LLMs just let you bring the supply-chain attacks in house.

                  In reply to mjg59@nondeterministic.computer, who wrote:

                    Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
                    LLMs: (enable that)
                    Free software people: Oh no not like that

                    dch@bsd.network wrote (#133):

                    @mjg59 My two favourite single-user LLM use cases:

                    One is for people who are physically immobile, to help them interact with others. Seeing how these tools can make them more able to engage with the world is heartening.

                    The other is my non-tech musician friend, who made a simple web page that ensures he plays all his tunes regularly but in random rotation. It hooks into Google Sheets, and he slopped it all up by himself.

                    In reply to mnl@hachyderm.io, who wrote:

                      @david_chisnall @mjg59 @ignaloidas just like humans! Or books!

                      ced@mapstodon.space wrote (#134):

                      @mnl @david_chisnall @mjg59 @ignaloidas you don't pick humans or books randomly.

                      In reply to troed@swecyb.com, who wrote:

                        @zacchiro I understood the ask I replied to as being about ethical training. Mistral, as an EU company, has to abide by EU regulations that AI companies in the US, China, etc. don't have to.

                        https://artificialintelligenceact.eu/article/53/

                        @chris_evelyn @mjg59

                        zacchiro@mastodon.xyz wrote (#135):

                        @troed I see. I don't know either what @chris_evelyn had in mind, so I'll leave it to them. But for what it's worth, the EU AI Act applies equally to all companies with access to the EU market. Mistral is not special in that respect, unless the other players decide to leave the EU market (which is unlikely). @mjg59

                        In reply to ced@mapstodon.space, who wrote:

                          @mnl @david_chisnall @mjg59 @ignaloidas you don't pick humans or books randomly.

                          mnl@hachyderm.io wrote (#136):

                          @ced @david_chisnall @mjg59 @ignaloidas neither does an llm? We are perfectly able to deal with, say, search engine results, which are arguably more problematic than llms. For all intents and purposes, the books and resources I have at my disposal are also the product of random processes. I can still work with them to learn things.

                          In reply to mjg59@nondeterministic.computer, who wrote:

                            When I write code I am turning a creative idea into a mechanical embodiment of that idea. I am not creating beauty. Every line of code I write is a copy of another line of code I've read somewhere before, lightly modified to meet my needs. My code is not intended to evoke emotion. It does not change how people think about the world. The idea→code pipeline in my head is not obviously distinguishable from the prompt→code process in an LLM.

                            luatic@mastodon.social wrote (#137):

                            @mjg59

                            This is such a bullshit, belittling framing of what developers do. The fact that you also belittle yourself doesn't make it any better.

                            Sure, the individual "line of code" may not be very unique. But the arrangement of many lines is. Your comparison is about equivalent to saying "hah, how can an author produce anything novel if he's just using the same old words of the English language!"

                            In reply to mjg59@nondeterministic.computer, who wrote:

                              Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
                              LLMs: (enable that)
                              Free software people: Oh no not like that

                              neintonine@social.iedsoftworks.com wrote (#138):

                              @mjg59@nondeterministic.computer If you want to use LLMs to make software do what you want, feel free to do it in a private fork. Private forks for yourself are fine. Private is private.
                              But it's also the freedom of the developer/maintainer of the software to not allow such changes upstream, or to require that such changes be marked.

                              In reply to mjg59@nondeterministic.computer, who wrote:

                                Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
                                LLMs: (enable that)
                                Free software people: Oh no not like that

                                rogersm@mastodon.social wrote (#139):

                                @mjg59 I have some issues with using LLMs, but the only one in the free software world is about license tainting: I'm not sure whether the code generated by an LLM is public domain.

                                In reply to mnl@hachyderm.io, who wrote:

                                  @ced @david_chisnall @mjg59 @ignaloidas neither does an llm? We are perfectly able to deal with, say, search engine results, which are arguably more problematic than llms. For all intents and purposes, the books and resources I have at my disposal are also the product of random processes. I can still work with them to learn things.

                                  ced@mapstodon.space wrote (#140):

                                  @mnl @david_chisnall @mjg59 @ignaloidas well great for you. *I*'m not able to deal with random search results (especially now that they are often slop). And if your books were bought randomly, sure. Mine were selected because I trust the author, or because I know enough about the author bias to be able to correct it.

                                  In reply to ced@mapstodon.space, who wrote:

                                    @mnl @david_chisnall @mjg59 @ignaloidas well great for you. *I*'m not able to deal with random search results (especially now that they are often slop). And if your books were bought randomly, sure. Mine were selected because I trust the author, or because I know enough about the author bias to be able to correct it.

                                    mnl@hachyderm.io wrote (#141):

                                    @ced @david_chisnall @mjg59 @ignaloidas do you not use a search engine (genuinely curious, I love building search engines and making them work well)?

                                    Do you think it’s impossible to assign varying degrees of trust to llm output?

                                    In reply to mjg59@nondeterministic.computer, who wrote:

                                      Look, coders, we are not writers. There's no way to turn "increment this variable" into life changing prose. The creativity exists outside the code. It always has done and it always will do. Let it go.

                                      karolherbst@chaos.social wrote (#142):

                                      @mjg59 I think this understanding of art stems from a misunderstanding of what art in itself is.

                                      Of course writing code can be an artistic activity, and trying to argue against that just shows a deep misunderstanding of those who see it that way.

                                      But "art's goal" isn't to be life-changing prose; for most art that isn't the goal at all. Most "classical" art was even seen as "just a craft".

                                      "Beauty" can manifest in many ways, and self-expression through code is a thing.

                                      In reply to mnl@hachyderm.io, who wrote:

                                        @newhinton @david_chisnall @mjg59 @ignaloidas I'm not really following. Using an LLM doesn't erase my brain the minute I use it, nor is it a random number generator where you are forbidden to check the answers. These all hold for LLMs.

                                        ignaloidas@not.acu.lt wrote (#143):

                                        @mnl@hachyderm.io @newhinton@troet.cafe @david_chisnall@infosec.exchange @mjg59@nondeterministic.computer The difference is that you can gain trust that some author knows their stuff in a specific field, and then you no longer need to cross-check every single thing they write.

                                        With an LLM no such trust can be developed, because fundamentally it's just rolling dice from a modeled distribution; the fact that the LLM was right about something the previous 9 times has no influence on whether the next statement will be correct or wrong.

                                        It's these trust relationships that let us work efficiently; cross-checking everything every time is incredibly time-consuming.

                                        In reply to ignaloidas@not.acu.lt, who wrote:

                                          @mnl@hachyderm.io @newhinton@troet.cafe @david_chisnall@infosec.exchange @mjg59@nondeterministic.computer The difference is that you can gain trust that some author knows their stuff in a specific field, and then you no longer need to cross-check every single thing they write.

                                          With an LLM no such trust can be developed, because fundamentally it's just rolling dice from a modeled distribution; the fact that the LLM was right about something the previous 9 times has no influence on whether the next statement will be correct or wrong.

                                          It's these trust relationships that let us work efficiently; cross-checking everything every time is incredibly time-consuming.

                                          mnl@hachyderm.io wrote (#144):

                                          @ignaloidas @mjg59 @david_chisnall @newhinton That's not how LLMs work, though: it being right 9 times out of 10 very much has an influence on whether the 10th time will be correct. That's literally how models are trained. There's an entire research field out there that studies it.

Powered by NodeBB
                                          Graciously hosted by data.coop