@dalias @lazza @Vivaldi okay - bit harsh. I do not _like_ the fact that AI technology exists in the form it does today; I'm a software developer who got laid off and is actively struggling to find work, in large part due to the proliferation of LLM code generation tools. So even if I were a lot more receptive to AI technology, I'd still find it hard to be a "slop apologist". But my view is that the cat is out of the bag: this technology _WILL_ continue to be developed. And yes, we SHOULD fight those who seek to do the "permanent underclass" bullshit - I think that's a no-brainer - and I don't disagree that, given the pushback, we're seeing a welcome pull away from AI technologies. I just think it's nothing more than wishful thinking to expect a complete wipeout of LLM usage.
tay@tech.lgbt
By now you've all probably heard about the latest shenanigans from Google and their love for in-browser AI features (if you haven't, this is the story: https://www.theverge.com/tech/924933/google-chrome-4gb-gemini-nano-ai-features).

@dalias @lazza @Vivaldi well, I think the reason it's in the browser itself is because a) these files are, as mentioned, massive, so you don't want each site to store its own copy, and b) I don't know if the WebGPU APIs are there yet for doing LLM inference at comparable speed.
I'm not opposed to the APIs in principle - LLM technology is simply not going away, there are actually decent use cases for it, and I oppose the current status quo of just shipping everything to OpenAI's or Anthropic's cloud servers.
My biggest concern is that no two LLM models will ever behave the same way, so sites and users that expect Google's Gemini model wouldn't get the same experience if, say, Safari shipped this with one of Apple's on-device models. And even if by some pure miracle we could convince all the implementations to standardise on one model (not happening), you could never update that model as newer ones are developed without breaking those expectations (which is also why the extension model wouldn't really work).
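To make that concern concrete: any site using a browser-provided model API would have to feature-detect it and treat the model as an opaque, version-unstable dependency, with a deterministic fallback for browsers that don't ship one. A minimal sketch, assuming a hypothetical `window.ai`-style prompt API (the names here are illustrative, not the actual shipped Chrome surface):

```javascript
// Hypothetical browser prompt API - names are illustrative, not a real spec.
// The point: a site can only ask "is *some* model there?", never
// "is it the exact model my prompts were tuned against?".

async function summarize(text) {
  // Feature-detect: the API may be absent entirely (Firefox, Safari,
  // older Chrome), so `window` and `window.ai` are both checked.
  const ai = typeof window !== "undefined" ? window.ai : undefined;
  if (!ai?.createTextSession) {
    return fallbackSummary(text); // deterministic, model-free path
  }
  const session = await ai.createTextSession();
  // Whatever comes back depends on the browser's bundled model and its
  // version - Gemini Nano today, something else after the next update.
  return session.prompt(`Summarize: ${text}`);
}

// Deterministic fallback: first sentence, truncated to 120 chars.
function fallbackSummary(text) {
  const firstSentence = text.split(/(?<=[.!?])\s/)[0];
  return firstSentence.length > 120
    ? firstSentence.slice(0, 117) + "..."
    : firstSentence;
}
```

The fallback path is the only part a site can actually rely on across browsers, which is exactly the standardisation problem: the interesting behaviour lives in the part you can't pin down.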