FARVEL BIG TECH
ignaloidas@not.acu.lt

@ignaloidas@not.acu.lt
About
Posts
9
Topics
0
Highlights
0
Groups
0
Followers
0
Following
0


Posts


  • Free software people: A major goal of free software is for individuals to be able to cause software to behave in the way they want it to
    LLMs: (enable that)
    Free software people: Oh no not like that
    ignaloidas@not.acu.lt

    @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @engideer@tech.lgbt the fact that something is random does not mean that it has a uniform distribution. "Controlled randomness" is still randomness. Picking random points in a unit circle by drawing two random numbers, one for the distance and one for the direction, will not result in a uniform distribution, but it's still random.

    like, do you even read what you're writing? I'm starting to understand why you don't trust the code you wrote

    Uncategorized
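    The unit-circle point in the post above can be checked empirically. A minimal sketch (not from the thread, all names made up): sampling the radius uniformly clusters points toward the center, while taking the square root of the uniform draw restores uniformity over the disk's area.

    ```python
    import math
    import random

    random.seed(0)
    N = 100_000

    def naive_point():
        # Naive: distance and direction each uniform -> NOT uniform over the disk
        r = random.random()
        theta = random.uniform(0, 2 * math.pi)
        return r * math.cos(theta), r * math.sin(theta)

    def uniform_point():
        # Correct: sqrt on the radius compensates for area growing as r^2
        r = math.sqrt(random.random())
        theta = random.uniform(0, 2 * math.pi)
        return r * math.cos(theta), r * math.sin(theta)

    def frac_in_inner_half(sampler):
        # Fraction of points within radius 0.5; that inner disk is 25% of the area
        hits = sum(1 for _ in range(N) if math.hypot(*sampler()) < 0.5)
        return hits / N

    naive_frac = frac_in_inner_half(naive_point)
    uniform_frac = frac_in_inner_half(uniform_point)
    print(naive_frac, uniform_frac)  # ~0.50 (clustered) vs ~0.25 (area-uniform)
    ```

    Both samplers are random; only one is uniform, which is the distinction being made.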

  • ignaloidas@not.acu.lt

    @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe you are falling into the cryptocurrency fallacy: assuming that you cannot trust anyone, and as such have to build everything assuming everyone is looking to get one over on you.

    This is tiresome, and I do not care to discuss this with you any longer. If you cannot understand that there are levels between "no trust" and "absolute trust", there is nothing more to discuss.

    Uncategorized

  • ignaloidas@not.acu.lt

    @mnl@hachyderm.io @engideer@tech.lgbt @david_chisnall@infosec.exchange @mjg59@nondeterministic.computer LLMs are very much random number generators. The distribution is far, far from uniform, but the whole breakthrough of LLMs was the introduction of "temperature", quite literally random choices, to break them out of monotonous tendencies.

    Uncategorized
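    For illustration, temperature sampling as described above can be sketched roughly like this. The logits and token count are made up, and this is not any particular model's implementation: logits are divided by the temperature, pushed through a softmax, and the next token is a weighted dice roll on the result.

    ```python
    import math
    import random

    def sample_with_temperature(logits, temperature, rng=random):
        # Scale logits by 1/T, softmax into probabilities, then roll the dice.
        # Low T approaches greedy argmax; higher T flattens the distribution.
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return rng.choices(range(len(logits)), weights=probs, k=1)[0]

    # Hypothetical logits for four candidate tokens
    logits = [2.0, 1.0, 0.5, 0.1]
    rng = random.Random(42)
    low_t = [sample_with_temperature(logits, 0.1, rng) for _ in range(1000)]
    high_t = [sample_with_temperature(logits, 2.0, rng) for _ in range(1000)]
    # Low temperature almost always picks token 0; high temperature spreads out.
    ```

    Either way the choice is a draw from a distribution, which is the "controlled randomness" point above.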

  • ignaloidas@not.acu.lt

    @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe through repeated checks and knowledge that humans are consistent.


    And like, really, you don't trust your code at all? I, for example, know that the code I wrote is not going to cheat on unit tests, not going to re-implement half of the things from scratch when I'm working on a small feature, nor will it randomly delete files. After working with people for a while, I can be fairly sure that the code they've written can be trusted to the same standards. LLMs can't be trusted with these things, and in fact have been documented to do all of them.

    It is not a blind, absolute trust, but trust within reason. The fact that I have to explain this to you is honestly embarrassing.

    Uncategorized

  • ignaloidas@not.acu.lt

    @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe Not blindly, of course, but I build up trust relationships with people I work with. And I do trust my own code to a certain extent. I can't trust a bunch of dice. The fact that you don't trust your own code at all honestly tells me all I ever need to know about you.

    Uncategorized

  • ignaloidas@not.acu.lt

    @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe all of that training is still continuation-based, because that is what the models predict. Yes, there is a bunch of research, and honestly, most of it is banging its head against fundamental issues of the model, but it is still being funded, because at the end of it all, LLMs are quite useless if they spit out nonsense from time to time and that nonsense is indistinguishable from the sensible output without carefully cross-checking it all.

    Tool calls are just that: tools to add stuff into the context for further prediction. They in no way ensure that the LLM output is correct, because once again, everything is treated as a continuation after the tool call, and the model is just predicting what's the most likely thing to do, not what's the correct thing to do.

    Uncategorized

  • ignaloidas@not.acu.lt

    @mnl@hachyderm.io @mjg59@nondeterministic.computer @david_chisnall@infosec.exchange @newhinton@troet.cafe the training objective is not "be correct", so that's not what the models are trained on. They aren't trained on such an objective because there's no way to score it: if you had a system that could determine whether a statement was correct, you could just use that instead. No, what the models are trained on are globs of existing text, targeting the continuations to be the same as that text. Notably, most (all?) LLM makers don't even care whether most of the text is "correct" (in any sense of the word), and "solve" this by also training on some more carefully selected globs of text. And in the end, what the model itself outputs are probabilities for a specific token (not even a sentence or anything larger) to be next. The text you get is all just dice rolls on those probabilities, again and again.

    It is a text prediction machine. A very powerful one, but it's still just prediction. It picks whatever is likely, with no regard for what is correct.

    Uncategorized
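    The "dice rolls on those probabilities, again and again" loop can be illustrated with a toy next-token model. The hand-written probabilities below stand in for a trained network and are entirely made up: generation just samples a successor from each token's distribution until an end marker comes up, with no notion of correctness anywhere in the loop.

    ```python
    import random

    # Toy next-token model: each token maps to a probability distribution
    # over successors. Generation is nothing but repeated weighted dice rolls.
    MODEL = {
        "<start>": {"the": 0.6, "a": 0.4},
        "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
        "a": {"cat": 0.7, "dog": 0.3},
        "cat": {"sat": 0.8, "<end>": 0.2},
        "dog": {"sat": 0.6, "<end>": 0.4},
        "moon": {"<end>": 1.0},
        "sat": {"<end>": 1.0},
    }

    def generate(rng):
        # Roll the dice on each successor distribution until "<end>" comes up.
        token, out = "<start>", []
        while token != "<end>":
            dist = MODEL[token]
            token = rng.choices(list(dist), weights=list(dist.values()), k=1)[0]
            if token != "<end>":
                out.append(token)
        return " ".join(out)

    rng = random.Random(7)
    print([generate(rng) for _ in range(3)])  # plausible-looking, never checked
    ```

    Every output is "likely" under the model's distribution; nothing in the loop asks whether it is correct.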

  • ignaloidas@not.acu.lt

    @mnl@hachyderm.io @newhinton@troet.cafe @david_chisnall@infosec.exchange @mjg59@nondeterministic.computer the difference is that you can gain trust that some author knows his stuff in a specific field and you no longer need to cross-check every single thing that they write.

    With an LLM, no such trust can be developed, because fundamentally it's just rolling dice from a modeled distribution; the fact that the LLM was right about something the 9 previous times has no influence on whether the next statement will be correct or wrong.

    It's these trust relationships that allow us to work efficiently; cross-checking everything every time is incredibly time-consuming.

    Uncategorized

  • ignaloidas@not.acu.lt

    @mjg59@nondeterministic.computer my problem with this argument is that LLMs aren't good at modifying the software, nor are they good at creating software that's easily modifiable.


    Also, I'd note that it's less "free software people" and more "people who are interested in quality software", and it's that interest that has driven them to free software, because most free software is of too high a quality for most companies to make or buy, economically speaking.

    Uncategorized
Powered by NodeBB Contributors
Graciously hosted by data.coop