By now you've all probably heard about the latest shenanigans from Google and their love for in-browser AI features (if you haven't, here's the story: https://www.theverge.com/tech/924933/google-chrome-4gb-gemini-nano-ai-features).
Our team has been inspecting the Chromium code and disabling stuff from the very first version of Vivaldi (we have some posts about this on our blog, such as https://vivaldi.com/blog/news/alert-no-google-topics-in-vivaldi/ or https://vivaldi.com/blog/no-google-vivaldi-users-will-not-get-floced/).
We've also been very outspoken about our dislike of the built-in AI trend in the browser industry, but in case there are still any doubts: yes, we disable all Gemini-related features, and we've been doing so for a while.
@Vivaldi Thank you!
-
@lazza @Vivaldi There's no way this stuff should be a first-class (mis)feature in the browser, even optionally.
Put it in an optional extension like it always should have been, only present if you install it intentionally.
"Always installed but off by default" has no user assurance that it's actually off and not suddenly going to get turned on somehow.
@dalias @lazza @Vivaldi Well, I think the reason it's in the browser itself is because a) these files are, as mentioned, massive, so you don't want each site storing its own copy, and b) I don't know if the WebGPU APIs are there yet for doing LLM inference at comparable speed.
I'm not opposed to the APIs in principle - LLM technology is simply not going away, there are actually decent use cases for it, and I oppose the current status quo of just shipping everything to OpenAI's or Anthropic's cloud servers.
My biggest concern is that no two LLM models will ever behave the same way, so sites and users that expect Google's Gemini model wouldn't have the same experience if, say, Safari shipped this with one of their on-device models. And even if by some pure miracle we could convince all the implementations to standardise on one model (not happening), you could never update that model as newer ones are developed without breaking those expectations (which is also why the extension model wouldn't really work).
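(A minimal sketch of the interoperability problem described above, assuming something like Chrome's experimental Prompt API with its `LanguageModel` global - the name and shape are still in flux, and other engines may ship a different model or nothing at all. Sites end up feature-detecting and carrying a fallback path anyway:)

```javascript
// Hedged sketch: a site can only feature-detect a built-in model, never rely
// on its behaviour, since each engine may bundle a different model (or none).
// "LanguageModel" mirrors Chrome's experimental Prompt API global; the exact
// name is an assumption and may change.
function pickSummarizer(globalObj) {
  if (typeof globalObj.LanguageModel !== "undefined") {
    // Output quality and behaviour depend entirely on whichever model
    // this particular browser happens to ship.
    return "built-in";
  }
  // Everyone else: degrade gracefully or bring your own model.
  return "fallback";
}

console.log(pickSummarizer(globalThis)); // "fallback" outside Chrome
```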
-
@kimcrawley @lazza @Vivaldi Indeed, but my point was that if bad people want to make this shit, they can put it in something under their control that uses an existing interface boundary, rather than expecting us to accommodate their wish to put it in a special privileged place.
Yes, it should be illegal too.
-
@lazza @Vivaldi Yes I do. And that does not help. Vivaldi or any respectable party should have absolutely no part in shipping/enabling this stuff.
If you want to install it, it should be a third-party extension provided by the slop provider, and subject to the same access controls all extensions are subject to.
-
@dalias Well, DFIR for law enforcement is definitely suspicious work.
Your only arguments are insults, which gives a clear picture of who you are.
I work for private clients by the way, not for law enforcement. Maybe try to learn what the word "consultant" means.
-
@lazza @kimcrawley What do you expect when you show up in someone's mentions advocating for the "AI" industry's interests?
-
@dalias @lazza @Vivaldi okay - bit harsh. I do not _like_ the fact that AI technology exists in the form it does today; y'know, I'm a software developer who got laid off and is actively struggling to find work, in large part due to the proliferation of LLM code generation tools - so even if I were a lot more receptive to AI technology, I'd still think it'd be hard for me to be a "slop apologist". But my view is that the cat is out of the bag. This technology _WILL_ continue to be developed, and yes, we SHOULD fight those who seek to do the "permanent underclass" bullshit - I think that's a no-brainer. And while I don't disagree that, given the pushback, we are seeing a welcome pull away from AI technologies, I think it is nothing more than wishful thinking to expect a complete wipeout of LLM usage.
-
@tay @lazza @Vivaldi Surely there will be some people who use it. We can't eliminate them. But there is absolutely no place for it in our browsers, in software we use, etc. much less giving websites we visit backdoors to our data and interactions via some "AI API".
Once the bubble finishes imploding (it's well along the way already), there will not be new gigantic models. The astronomical costs don't justify it. They don't even justify continuing to offer the existing ones at affordable prices. The existing public models you can run client-side will of course still exist but will be increasingly outdated. This will not be a complete wipeout, but it will be close.
-
@Teratogenese @tay @lazza @Vivaldi And asbestos was actually very useful. Just a poor hazard/benefit tradeoff.
The slop extruders aren't even useful except for doing evil things like scams, spam, and disinformation at scale.
-
@tay @dalias @lazza @Vivaldi just no.
This is a dead-end technology with no future.
We have known this for over a decade. It used to be called 'expert systems' and similar. Go look up IBM Watson. And that was done by far smarter people, manually training a targeted dataset with people who were experts in the field. This is not new technology. It is a waste of resources to do a bad implementation of a chatbot from the 1970s so a bunch of sociopathic techbros can siphon money for themselves.
-
@Vivaldi I never thought I'd see the day where I'm glad that a developer ISN'T adding features lmao
-
@Vivaldi I'd long ago quit using Chrome, had installed several other options, and moved entirely to Firefox a while ago. I did download Vivaldi, Brave, and some others, LibreWolf included. I no longer trust anything Google. Not that Vivaldi has anything to do with them, but I'm hedging my bets by removing anything Chromium from my system. Even if Linux is far safer than other OSes.
-
@Vivaldi will you consider making it optional rather than fully removing it? Like an opt-in feature?
I know Vivaldi is very friendly when it comes to user choice.