Yk, the most insane thing about this whole age verification disaster is that EU countries are planning to enforce it without eIDAS even being rolled out. For better or worse, there is a way to check age via something like the eIDAS wallet's proposed zero-knowledge proofs. But all currently viable options, including the reference wallet design, require hardware and software from US companies (an iPhone, or Android with Play Integrity). Some even require US payment providers ... what a footgun.
pojntfx@mastodon.social
Posts
-
Here is a sad (and somewhat pathetic, I guess) fact: the new Firefox "smart window" (an LLM-based browser mode) doesn't even use a local or open model - it's literally just Google's models run via their API.
-
@madsenandersc Huh, interesting - yeah, I never really deal with languages other than French, German and English I guess, so I haven't really run into this. For web search, https://newelle.qsk.me/#home has been surprisingly good with an 18B model, even though it's slow.
I guess one way they could implement the whole remote server situation would be to lean on, say, an OpenAI-compatible API - which something like vLLM, llama.cpp, SGLang and so on can provide.
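To illustrate the idea: an OpenAI-compatible server just takes a chat-completion JSON payload at a `/v1/chat/completions` endpoint, so a browser could target any backend that speaks it. A minimal sketch of building such a request - the URL and model name are placeholders for whatever a local server (e.g. llama.cpp's `llama-server`) actually exposes:

```python
import json

# Hypothetical local OpenAI-compatible endpoint (e.g. llama-server's default port)
url = "http://localhost:8080/v1/chat/completions"

# The standard chat-completion request shape; local servers often ignore
# or remap the "model" field, so this name is just a placeholder.
payload = {
    "model": "local-model",
    "messages": [
        {"role": "user", "content": "Summarize the current page for me."},
    ],
}

# Serialize to the JSON body you would POST to `url`
body = json.dumps(payload)
print(body)
```

Because the wire format is the same across vLLM, llama.cpp, SGLang, and hosted providers, pointing a client at a different backend is just a matter of swapping the base URL.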
-
@kstrlworks Ladybird's governance issues really make it not a viable alternative in my eyes. Solid engineering, but damn, I won't be working with someone who believes I shouldn't be working or even exist
-
@madsenandersc You're not wrong in a lot of ways. But I'll also say that recent advances in quantization (I'm using the GLM-4.6V model) and the Vulkan acceleration support in, say, llama.cpp are making a big difference. My RX 4060 and AMD 890M are more than good enough to instrument a browser with a fully local LLM now.
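For reference, a rough sketch of the kind of local setup described above, assuming a llama.cpp checkout, working Vulkan drivers, and a quantized GGUF of the model on disk (the file paths are placeholders):

```shell
# Build llama.cpp with the Vulkan backend (requires Vulkan headers/drivers)
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Serve a quantized model locally, offloading all layers to the GPU
# (-ngl 99 ≈ "as many layers as fit"); exposes an OpenAI-compatible API
./build/bin/llama-server -m ./models/model.gguf -ngl 99 --port 8080
```

The Vulkan backend is what makes this workable on AMD iGPUs like the 890M, since it doesn't depend on CUDA or ROCm support.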
-
@kstrlworks Servo honestly seems like the only way forward.
-
@freddy @buherator I hope there is at least an option to use a local LLM - heck, even GLM-4.6V is good enough for instrumenting browsers in my experience. Signing into an account (thereby tying all of my LLM context directly to my identity with Mozilla) and proxying via Mozilla infrastructure to Google (which does not anonymise anything, since the context already contains everything) seems like a terrible direction here, seriously. Esp. given that there are lots of ways to run LLMs locally.