@pojntfx I have seen PRDs that discuss local models, but I don’t recall if they are part of the mvp. Might just be that folks can set a custom pref in about:config for now.
freddy@social.security.plumbing
Posts
-
Here is a sad (and somewhat pathetic, I guess) fact: The new Firefox "smart window" (which is an LLM-based browser) doesn't even use a local or open model, it's literally just Google's models run via their API -
@oliviablob @pojntfx @buherator I agree. If you want privacy, you probably shouldn’t use an LLM hosted and controlled by someone else. I certainly wouldn’t.
-
@pojntfx @buherator Not true. The feature is currently in development and uses different models while things are still under test. This is pre-release software; behavior will change.
Currently, everything is proxied through Mozilla infra. The model that will ship is (afaiu) not yet determined. -
Hot take: If we added a "--install" option to #curl, we could optimize many a "| sh -" pipeline away.
@larsmb pair it with some yet-to-be-specified `integrity` parameter to check the file and we're there.
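Neither a `--install` option nor an `integrity` parameter exists in curl today; both are hypothetical proposals from this thread. As a sketch of what such a combined flag would fold into one command, here is the manual download-verify-run sequence it would replace (the installer content and digest handling are illustrative, and the download is simulated with a local file):

```shell
#!/bin/sh
# Sketch of the verify-before-run step a hypothetical `curl --install`
# could replace. The "download" is simulated with a local file so the
# example is self-contained.
set -e

# Simulated downloaded installer (stands in for `curl -fsSL "$url" -o install.sh`)
printf 'echo installed\n' > install.sh

# Pin the expected digest. In practice this would be published by the
# project out-of-band; it is computed here only to make the demo runnable.
expected="$(sha256sum install.sh | cut -d' ' -f1)"

# Verify before executing -- sha256sum -c fails (and set -e aborts) on mismatch
echo "$expected  install.sh" | sha256sum -c - && sh install.sh
```

The point of the proposed `integrity` check is exactly the middle step: the script only runs if the fetched bytes match a pinned digest, instead of being piped straight into `sh`.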