FARVEL BIG TECH
dhd6@jasette.facil.services
Posts: 3 · Topics: 0 · Highlights: 0 · Groups: 0 · Followers: 0 · Following: 0


Posts


  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    dhd6@jasette.facil.services

    @pluralistic @tante @simonzerafa As always, yes and no. A bug zapper is designed to zap bugs; it is a simple mechanism that does that one thing, and does it well. An LLM is designed to read text and generate more text.

    That we have decided that the best way to do NLP is to use massively overparameterized word predictors that we have trained using RL to respond to prompts, rather than just, like, doing NLP, is just crazy from an engineering standpoint.

    Rube Goldberg is spinning in his grave!

    Uncategorized

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    dhd6@jasette.facil.services

    @tante @pluralistic @simonzerafa But ALSO: using a multi-billion-parameter synthetic text extruding machine to find spelling and syntax errors is a blatant example of "doing everything the least efficient way possible" and that's why we are living on an overheating planet buried under toxic e-waste.

    If I think about it harder I could probably come up with a more clever metaphor than killing a mosquito with a flamethrower, but you get the idea.

    Uncategorized
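For the spelling case specifically, a minimal sketch of the flyswatter-scale alternative, assuming only Python's standard library and the word list most Unix systems ship at /usr/share/dict/words:

```python
# Minimal sketch of the "simple mechanism" alternative: a plain word-list lookup
# flags spelling errors with no model at all. Assumes the word list that ships
# with most Unix systems at /usr/share/dict/words.
import difflib
import re

def check_spelling(text, wordlist_path="/usr/share/dict/words"):
    """Return a dict mapping unknown words to close dictionary matches."""
    with open(wordlist_path) as f:
        dictionary = {line.strip().lower() for line in f if line.strip()}
    suggestions = {}
    for word in re.findall(r"[A-Za-z]+", text):
        if word.lower() not in dictionary:
            # Cheap edit-similarity ranking from the standard library; no GPUs required.
            suggestions[word] = difflib.get_close_matches(word.lower(), dictionary, n=3)
    return suggestions

print(check_spelling("Teh quick brown fox jumpd over the lazy dog"))
```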

  • Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture".
    dhd6@jasette.facil.services

    @tante @pluralistic @simonzerafa I agree in principle with Cory, but I really wish that he had clarified that:

    1. Ollama is not an LLM; it's a server for various models of varying degrees of openness.
    2. Open weights is not open source; the model is still a black box. We should support projects like OLMo, which are completely open, down to the training data set and checkpoints.
    3. It's quite difficult to "seize that technology" without using Someone Else's Computer to do so (a.k.a. clown/cloud).

    Uncategorized
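To make point 1 concrete: Ollama is just a local HTTP server that loads and serves whatever model weights you point it at. A minimal sketch of talking to it, assuming a default local install listening on port 11434 and a hypothetical already-pulled model called "llama3":

```python
# Minimal sketch: Ollama is a local server that hosts models, not a model itself.
# Assumes Ollama is running on its default port (11434) and that a model
# (here hypothetically "llama3") has already been fetched with `ollama pull llama3`.
import json
import urllib.request

def ask_local_model(prompt, model="llama3"):
    """Send a prompt to the locally hosted Ollama server and return its reply."""
    payload = json.dumps({
        "model": model,    # which locally stored weights to serve
        "prompt": prompt,
        "stream": False,   # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's generate endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize the difference between open weights and open source."))
```

Because the whole round trip stays on localhost, this is one of the few setups that avoids the Someone Else's Computer problem from point 3, provided your own machine can hold the weights.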