kwazekwaze@mastodon.social
Posts
-
Pulling the diagram back out.
-
Large models are inherently theft and there is no amount of specious comparisons you can make or weaseling you can do to get around that fact
Nearly all of their value comes from the unconsented and uncompensated labor of others.
Your "niche use cases" rely on this fact.
It doesn't matter if there's no risk of output directly "infringing". The entire premise of the product is the theft of others' work.
-
@timnitGebru This "careful" "AI Safety" company that just accidentally leaked its entire source code to the world is the one that African governments are entering into agreements with to include in infrastructure from health care to god knows what.
And sorry, none of this is directed at you -
@timnitGebru There's something especially heinous about using the word "works" like this despite knowing all of the issues. I feel like this has been litigated to death at this point, and people should know better by now.
Leaded gasoline "works". Downtown freeways "work". Asbestos "works". The list goes on. It's tiresome! It's irksome! It strikes me as if this author thought the theft machine wasn't capable of reproducing the working content it stole! Yes! That's why we call it a theft machine!
-
@timnitGebru In a perfect world I'd accept people who love their codegen chatbots as no different from people who prefer the command line, or tabs over spaces!
But we're not in that world. They're actively forcing their products on everyone else, and posts like these reek of someone who has the privilege of not having that done to them.
-
@timnitGebru That blog post strikes me as incredibly irresponsible.
The legalistic use of the word "works" - the post itself includes the key phrase "works with caveats"! - produces an otherwise reasonable conclusion that becomes absolutely heinous anywhere that isn't a vacuum. Suggesting people need to be more accommodating toward LLM users is a joke when this is the cohort attempting to force their (by the author's own recognition, horrifically joyless to use) toys onto and into everyone else's lives.