I propose a navigation plugin for the AI slop era. The tool should help us find LLM-free sites based on Web of Trust crowd verification. While browsing the web, you label a site as either free from or polluted by LLM slop. One button in your browser toolbar with a keyboard shortcut. An icon indicates how the community assesses the page. Let's rid ourselves of the constant suspicion that everything we read is AI-generated bullshit and put our collective intelligence to work.
Could someone build this?
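Roughly the shape I imagine, as a Firefox WebExtension background script. The registry API (slop-registry.example) and its endpoints are invented here; this is only a sketch of the toolbar-button-plus-icon loop:

    // Sketch only: assumes a hypothetical voting/score API at slop-registry.example.
    declare const browser: any; // WebExtension API, provided by Firefox at runtime

    type Verdict = "clean" | "slop" | "unknown";
    const API = "https://slop-registry.example/v1";

    // When a tab finishes loading, fetch the community verdict for its host
    // and swap the toolbar icon (green / red / grey) accordingly.
    browser.tabs.onUpdated.addListener(async (tabId: number, info: any, tab: any) => {
      if (info.status !== "complete" || !tab.url) return;
      const host = new URL(tab.url).hostname;
      const res = await fetch(`${API}/score?host=${encodeURIComponent(host)}`);
      const { verdict } = (await res.json()) as { verdict: Verdict };
      await browser.browserAction.setIcon({ tabId, path: `icons/${verdict}.svg` });
    });

    // Clicking the toolbar button (or its keyboard shortcut, declared under
    // "commands" in manifest.json) casts a vote for the current site.
    browser.browserAction.onClicked.addListener(async (tab: any) => {
      if (!tab.url) return;
      await fetch(`${API}/vote`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ host: new URL(tab.url).hostname, label: "slop" }),
      });
    });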
Add-on: Integration of the project to search engines. So I could activate it in #Kagi or whatever I use and not see all the sites that get a score below a certain threshold.
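Until a real integration exists, a content script could do this client-side. The result selector and scoring endpoint below are placeholders, not Kagi's actual markup or API:

    // Sketch: hide search results whose domain scores below a threshold.
    // "._result" and slop-registry.example are placeholders.
    const THRESHOLD = 0.5;

    async function scoreOf(host: string): Promise<number> {
      const res = await fetch(
        `https://slop-registry.example/v1/score?host=${encodeURIComponent(host)}`,
      );
      return ((await res.json()) as { score: number }).score;
    }

    document.querySelectorAll<HTMLElement>("._result").forEach(async (result) => {
      const link = result.querySelector<HTMLAnchorElement>("a[href]");
      if (!link) return;
      if ((await scoreOf(new URL(link.href).hostname)) < THRESHOLD) {
        result.style.display = "none"; // below threshold: drop it from view
      }
    });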
-
Extra feature: Possibility to comment or add documentation explaining why you think a page should be labelled either free from or polluted by LLM slop. A moderator team could go through such comments and give them stronger influence on the score, or simply decide to lock the site in at some score if the evidence is strong enough.
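In data terms, something like this (all field names invented):

    // Sketch of a label with attached evidence, plus a moderator lock.
    interface LabelClaim {
      host: string;
      label: "clean" | "slop";
      userId: string;
      comment?: string;        // free-text reasoning for the label
      evidenceUrls?: string[]; // e.g. pre-LLM archive snapshots, drafts
      createdAt: string;       // ISO 8601 timestamp
    }

    interface SiteRecord {
      host: string;
      score: number;        // aggregate in [0, 1]; 1 = confidently human-made
      claims: LabelClaim[];
      lockedBy?: string;    // moderator id; once set, votes stop moving the score
      lockedScore?: number; // the locked-in value
    }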
-
@malte [cue hundreds of bots plugging into the extension and simulating the keypress on every single website]
-
@malte there's an extension called uBlacklist that lets you hide domains from search results, and people can curate and share lists with others:
https://addons.mozilla.org/en-US/firefox/addon/ublacklist/
For example:
https://github.com/laylavish/uBlockOrigin-HUGE-AI-Blocklist
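A shared list is just a text file of rules you subscribe to; as far as I remember, uBlacklist takes match patterns, one per line (the domains below are placeholders):

    # example shared blocklist (placeholder domains)
    *://*.ai-content-farm.example/*
    *://*.generated-recipes.example/*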
-
@noodlejetski Good catch. Web of Trust means we need some kind of P2P assessment of the users too, so that users who tend to assess sites correctly gain more influence over the score, while bots trying to manipulate it get very little.
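As a toy version of what I mean (the numbers are picked out of thin air):

    // Toy reputation update: a user's weight grows when their label matches
    // the eventual consensus on a site, and shrinks faster when it doesn't.
    // A thousand fresh bot accounts all start near zero influence.
    function updateWeight(weight: number, agreedWithConsensus: boolean): number {
      const delta = agreedWithConsensus ? 0.05 : -0.15; // asymmetric on purpose
      return Math.min(1, Math.max(0.01, weight + delta));
    }

    // A site's score is then the weighted average of its votes.
    function siteScore(votes: { label: 1 | -1; weight: number }[]): number {
      const mass = votes.reduce((sum, v) => sum + v.weight, 0);
      const total = votes.reduce((sum, v) => sum + v.label * v.weight, 0);
      return mass === 0 ? 0 : total / mass; // in [-1, 1]
    }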
-
@malte I like the idea in theory—we need something to help filter out the garbage. But this could go sideways fast.
A friend of mine does surreal art and has for years. Since the AI explosion, people keep accusing him of using AI. His actual art. That he's been making since before DALL-E was even a thing. It just happens to look weird and dreamlike, so now everyone's suspicious.
That's what worries me about crowd-sourcing this. People would flag anything that feels AI-ish, even if it's just someone with an unusual style. Experimental writers, non-native speakers, artists doing anything unconventional—they'd all get caught in the crossfire. And once you're flagged, good luck shaking that off.
Plus "AI slop" vs "content I personally don't like" is going to blur together real fast. Mob mentality isn't exactly known for nuance.
-
@malte
I had similar ideas, but how do you prevent someone from signing up a bunch of bots into the community to sway the votes on sites up or down?
-
@malte what browser do you use? I can start writing a plugin
-
@malte I’m assuming you don’t want it to wait for some review process, but rather collect live feedback.
This might be a legitimate use case for a “just throw a blockchain-ish thing at it” solution.
Basically, a distributed event log of pseudonymized claims that can be synchronized on a whim. Can be additionally weighted locally based on private reactions to resulting classifications.
Abusable, but could be resistant to hostile takeover.
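Concretely, something like this, where each claim is a signed, hash-chained record that peers replay and weight locally (all names invented):

    import { createHash } from "node:crypto";

    // One entry in a distributed, append-only claim log. Peers sync entries,
    // verify signatures, and fold them into scores using their own local
    // trust weights; nobody owns the canonical tally.
    interface ClaimEvent {
      host: string;
      label: "clean" | "slop";
      authorPubKey: string; // pseudonymous identity
      signature: string;    // signs all fields above
      prevHash: string;     // chains to the previous entry, so history
                            // can't be quietly rewritten
      timestamp: number;
    }

    function eventHash(e: ClaimEvent): string {
      return createHash("sha256").update(JSON.stringify(e)).digest("hex");
    }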
PS: I’m probably wrong!
-
@malte
Good idea.
Until someone does build it, https://kottke.org/25/07/the-kottkeorg-rolodex provides links to some good sites written by real people.
-
@malte Reminds me of https://shinigami-eyes.github.io/
-
@curiouscat Yes, it's the federated version of those lists, in a way.
-
@diffdude Awesome. I'm one of those who's on Firefox.
-
@giuda_ballerino I share your concern. Did you see my extra comment? I think your friend should be able to add evidence or documentation that their work is indeed not LLM-generated. The very problem we want to solve is that we constantly suspect this. https://radikal.social/@malte/115400418825934736
-
@swope In another project I've benefited a lot from, some users have a higher influence on the score based on their past ability to judge correctly. In such a scenario, those bots would lose their influence. I was hoping someone could describe this better. It has to be a WEB of trust, so we apply some kind of discrimination mechanism to users too.
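The "web" part could be as simple as: I vouch for a few people I trust, trust decays a step or two outward, and a vote's weight is the voter's trust as seen from my own account. A toy sketch:

    // Toy trust propagation: each user lists people they vouch for, and trust
    // decays per hop from my own account. A cluster of bots vouching only for
    // each other, with nobody reputable pointing in, stays near zero.
    function trustFrom(
      me: string,
      vouches: Map<string, string[]>, // user -> users they vouch for
      decay = 0.5,
      maxHops = 3,
    ): Map<string, number> {
      const trust = new Map<string, number>([[me, 1]]);
      let frontier = [me];
      for (let hop = 1; hop <= maxHops; hop++) {
        const next: string[] = [];
        for (const u of frontier) {
          for (const v of vouches.get(u) ?? []) {
            const t = (trust.get(u) ?? 0) * decay;
            if (t > (trust.get(v) ?? 0)) {
              trust.set(v, t); // keep the strongest path found so far
              next.push(v);
            }
          }
        }
        frontier = next;
      }
      return trust;
    }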