FARVEL BIG TECH
Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
LLMs: (enable that)
Free software people: Oh no not like that
317 Posts, 120 Posters, 0 Views
This thread has been deleted. Only users with topic-management privileges can see it.
  • mjg59@nondeterministic.computer

    Clearly my most unpopular thread ever, so let me add a clarification: submitting LLM generated code you don't understand to an upstream project is absolute bullshit and you should never do that. Having an LLM turn an existing codebase into something that meets your local needs? Do it. The code may be awful, it may break stuff you don't care about, and that's what all my early patches to free software looked like. It's ok to solve your problem locally.

    trademark@fosstodon.org wrote (#76, last edited):

    @mjg59 The LLM-hate reminds me of the backlash against computers themselves. People insisted they were 100% worthless because someone got a bill for $0, and then a notice they were in arrears when it was not paid. Many projects either failed outright or people had to do their work twice, first the old pen and paper way which worked, and then also put it into the computer never to be seen again...

    • mjg59@nondeterministic.computer

      Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
      LLMs: (enable that)
      Free software people: Oh no not like that

      david_chisnall@infosec.exchange wrote (#77, last edited):

      @mjg59

      I’ve heard this argument before and I disagree with it. My goal for Free Software is to enable users, but that requires users have agency. Users being able to modify code to do what they want? Great! Users being given a black box that will modify their code in a way that might do what they want but will fail in unpredictable ways, without giving them any mechanism to build a mental model of those failure modes? Terrible!

      I am not a carpenter but I have an electric screwdriver. It’s great. It lets me turn screws with much less effort than a manual one. There are a bunch of places where it doesn’t work, but that’s fine, I can understand those and use the harder-to-use tool in places where it won’t work. I can build a mental model of when not to use it and why it doesn’t work and how it will fail. I love building the software equivalent of this, things that let end users change code in ways I didn’t anticipate.

      But LLM coding is not like this. It’s like a nail gun that has a 1% chance of firing backwards. 99% of the time, it’s much easier than using a hammer. 1% of the time you lose an eye. And you have no way of knowing which it will be. The same prompt, given to the same model, two days in a row, may give you a program that does what you want one time and a program that looks like it does what you want but silently corrupts your data the next time.

      That’s not empowering users, that’s removing agency from users. Tools that empower users are ones that make it easy for users to build a (nicely abstracted, ignoring details that are irrelevant to them) mental model of how the system works, and therefore the ability to change it in precise ways. Tools that remove agency from users take away their ability to reason about how systems work and how to effect precise change.

      I have zero interest in enabling tools that remove agency from users.

      • ck@chaos.social wrote (#78, last edited):

        @jenesuispasgoth @mjg59 This is not AI endorsement, but given a sufficiently large problem / codebase, I would wager you wouldn't get a reliably identical result from having a human write code for the same problem twice either.
        We expect determinism from LLMs because "it's computers", not because it's necessary for good results.

        • dgold@goblin.technology

          @mjg59 strictly local needs, you do you.

          If using a giant model like Claude, you might want to consider what remodelling that code will cost the planet in terms of direct carbon output, electricity generation, water pollution, amortised environmental cost of building the Pollution Centres and the ongoing damage to local communities of the Pollution Centres.

          If you can live with all that? Sure, use your magic autocomplete. Just don't expect others to not judge you for it. Not saying I would, btw, but that's the argument.

          seachaint@masto.hackers.town wrote (#79, last edited):

          @dgold @mjg59 Yea, but, like.. A functioning biosphere, human rights, a functioning democracy, respect for small peoples' creative rights, and code quality just aren't relevant to my specific use case

          • mjg59@nondeterministic.computer

            Look, coders, we are not writers. There's no way to turn "increment this variable" into life changing prose. The creativity exists outside the code. It always has done and it always will do. Let it go.

            mrberard@mastodon.acm.org wrote (#80, last edited):

            @mjg59

            They do speak of 'elegance' even 'beauty' when it comes to mathematical proofs.

            Aesthetics are not a positivist axiology. Beauty is famously in the eye of the beholder.

            Just because you are aware you write ugly code doesn't mean code cannot be beautiful, or that aesthetics are not a legitimate field of assessing information systems.

            • mjg59@nondeterministic.computer

              @dekkzz78 I am absolutely not going to argue that LLMs replace the need for skilled developers! But many people who want to modify software are just doing it for personal use and if we argue using LLMs for that is unethical we risk alienating them all

              rogerbw@discordian.social wrote (#81, last edited):

              @mjg59 @dekkzz78 People who have decided that they don't need to learn how to program are not people whose recycled code I want in my projects or on my computers.

              • mjg59@nondeterministic.computer

                @barubary given my history, if your immediate conclusion is that I'm not presenting an honest opinion then I think you have a fundamental misunderstanding of who I am

                barubary@infosec.exchange wrote (#82, last edited):

                @mjg59 No, I do think you're being honest, I just think your opinion is kinda bad.

                • mjg59@nondeterministic.computer

                  When I write code I am turning a creative idea into a mechanical embodiment of that idea. I am not creating beauty. Every line of code I write is a copy of another line of code I've read somewhere before, lightly modified to meet my needs. My code is not intended to evoke emotion. It does not change how people think about the world. The idea→code pipeline in my head is not obviously distinguishable from the prompt→code process in an LLM.

                  whitequark@social.treehouse.systems wrote (#83, last edited):

                  @mjg59 skill issue tbh

                  i barely write code for other reasons

                  • david_chisnall@infosec.exchange

                    mnl@hachyderm.io wrote (#84, last edited):

                    @david_chisnall @mjg59 @ignaloidas llms can be used to explain and learn things. Unsurprisingly, that’s what many people do when things don’t work, be they written by a human or not, and they want them to work.

                    • promovicz@chaos.social

                      @mjg59 What you propose is actually illegal, even if the law doesn’t make much sense. I wonder if you ever had the cops sent after you on a corp-run IP case… maybe it would make you feel different?

                      ck@chaos.social wrote (#85, last edited):

                      @promovicz
                      That completely oversimplifies what's being discussed here. Every math book you ever studied is copyrighted; that does not mean you cannot use what you learned to solve math problems.

                      @mjg59

                      • mjg59@nondeterministic.computer

                        valpackett@social.treehouse.systems wrote (#86, last edited):

                        @mjg59 heh, one of the new ideas in a project I'm doing virtualization work for is to have a fully local LLM generate bespoke apps and instantly summon them directly on the desktop.

                        I don't think current local LLMs are actually "ethical" either, all my "fuck that entire industry" concerns are always present, and personally I wouldn't like using straight up fuzzy statistically magically inferred apps at all. But I do like the idea of empowering people to locally just do bespoke things like that, as long as there's always a big disclaimer about it being made that way and so on.

                        • mjg59@nondeterministic.computer

                          @barnoid Huh, interesting, that's really not my experience of writing code - I sit down with a formed idea of what needs to happen and then I smash keys until it's there. And now I'm curious whether there's a real disconnect between different models of coding.

                          barnoid@mastodon.me.uk wrote (#87, last edited):

                          @mjg59 You never realise the original idea could be improved a bit along the way? This probably depends on what's being worked on. Most of the stuff I do is fairly small scale and not particularly well specified (day job is mostly sysadmin, off day jobs are museum installations).

                          • mnl@hachyderm.io

                            david_chisnall@infosec.exchange wrote (#88, last edited):

                            @mnl @mjg59 @ignaloidas

                            And they will give entirely plausible explanations. Occasionally, by coincidence, they will be correct.

                            • david_chisnall@infosec.exchange

                              mnl@hachyderm.io wrote (#89, last edited):

                              @david_chisnall @mjg59 @ignaloidas just like humans! Or books!

                              • mnl@hachyderm.io

                                david_chisnall@infosec.exchange wrote (#90, last edited):

                                @mnl @mjg59 @ignaloidas

                                Not even close. Humans build mental models of things and, if correct in one area, are likely to be correct in adjacent ones. And, in most cases, are able to say ‘I don’t know’ when they don’t know the answer. Books (at least, those from reputable publishers) are reviewed by technical reviewers who spot factual errors, and have finite contents and so will simply not contain an answer if it is not something the author thought to write.

                                LLMs will interpolate over an n-dimensional latent space to provide a convincing answer. That answer may, if those bits of the latent space were well populated by things in the training set, be correct. But there is no difference from an LLM’s perspective between a correct and incorrect answer, only a likely and unlikely one.

                                • mjg59@nondeterministic.computer

                                  petko@social.petko.me wrote (#91, last edited):

                                  @mjg59 you might be missing a few of people's issues with LLMs. Our programmer standpoint is quite niche.

                                  What happens to people with jobs that are affected by LLMs? They either start using LLMs to match the competition's performance, or get obsoleted... What if they can't actually afford using LLMs to stay competitive?...

                                  And then there's art.

                                  On top of all of that LLMs are energy and resource-hungry, ruining the environment and making everything more expensive...

                                  • david_chisnall@infosec.exchange

                                    mnl@hachyderm.io wrote (#92, last edited):

                                    @david_chisnall @mjg59 @ignaloidas I have encountered plenty of people and books that were wrong, so I still have to engage my brain and double check, though.

                                    • mjg59@nondeterministic.computer

                                      ichthyx@piaille.fr wrote (#93, last edited):

                                      @mjg59 Funny one, but you forgot the most important part of code. It's a tool for human understanding. Statistics can *probably* find some common pattern, but that has nothing to do with "understanding".

                                      • chris_evelyn@fedi.chris-evelyn.de

                                        @mjg59 Yeah, as soon as there's an ethically sourced and trained free LLM that's not controlled by very shitty companies I'm totally on board with you.

                                        Until then we shouldn't let that shit near our projects.

                                        troed@swecyb.com wrote (#94, last edited):

                                        @chris_evelyn

                                        It's my belief that Mistral's models fit that bill.

                                        @mjg59

                                        • mjg59@nondeterministic.computer

                                          strm@fedi.inclementaviary.uk wrote (#95, last edited):
                                          @mjg59
                                          I can't help but feel this leads to short-term decision making.

                                          On the one hand I get it, people have shit to do and don't want to fight with upstream projects to get their needs met. Software dev culture can be a warzone.

                                          On the other, I see this as creating a bunch of fragile siloed work, everyone solving their own immediate needs in the short term rather than working together to build a more robust long-term solution for most needs. No assumptions challenged in their approach or potential improvements to their workflow, just a "yes boss" and something that may work in the now.

                                          It feels like the seeds of an increasingly insular world, "got mine jack" culture.
                                          Powered by NodeBB Contributors
                                          Graciously hosted by data.coop