I propose a navigation plugin for the AI slop era. The tool should help us find LLM-free sites based on Web of Trust crowd verification. Browsing the web, you label a site as either free from or polluted by LLM slop. One button in your browser toolbar with a keyboard shortcut. An icon indicates how the community assesses the page. Let's rid ourselves of the constant suspicion that all we read is AI-generated bullshit and put our collective intelligence to work.
Could someone build this?
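For anyone tempted to build it: here is a minimal sketch of what the extension's background script could look like, as a Firefox WebExtension (Manifest V2). The slopwatch.example host and its endpoints are placeholders for a community backend that doesn't exist yet; the keyboard shortcut would be a `commands` entry in manifest.json mapped to `_execute_browser_action`.

```ts
// Hypothetical background script: one toolbar button to label the current
// site, and a badge reflecting the community score on every page load.
// The slopwatch.example API is an assumption, not a real service.

const API = "https://slopwatch.example/v1";

// Toolbar button (and its keyboard shortcut via "_execute_browser_action"):
// submit the user's label for the domain of the active tab.
browser.browserAction.onClicked.addListener(async (tab) => {
  if (!tab.url) return;
  const domain = new URL(tab.url).hostname;
  await fetch(`${API}/labels`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ domain, label: "llm-free" }), // or "slop"
  });
});

// When a page finishes loading, show how the community assesses it.
browser.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  if (changeInfo.status !== "complete" || !tab.url) return;
  const domain = new URL(tab.url).hostname;
  const res = await fetch(`${API}/score?domain=${encodeURIComponent(domain)}`);
  const { score } = await res.json(); // assumed range: -1 (slop) .. 1 (clean)
  browser.browserAction.setBadgeText({ tabId, text: score.toFixed(1) });
});
```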
-
Add-on: Integration of the project to search engines. So I could activate it in #Kagi or whatever I use and not see all the sites that get a score below a certain threshold.
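Until a search engine supports this natively, a content script could apply the threshold client-side. A rough sketch, with the caveat that the `article` selector is a guess (Kagi's actual result markup may differ) and the scoring endpoint is the same placeholder as above:

```ts
// Hide search results whose domain scores below a threshold.
// Selector and API are assumptions; real integration belongs in the engine.

const THRESHOLD = 0; // hide anything the community rates below neutral

async function getScore(domain: string): Promise<number> {
  const res = await fetch(
    `https://slopwatch.example/v1/score?domain=${encodeURIComponent(domain)}`,
  );
  return (await res.json()).score;
}

async function filterResults(): Promise<void> {
  for (const result of document.querySelectorAll<HTMLElement>("article")) {
    const link = result.querySelector<HTMLAnchorElement>("a[href^='http']");
    if (!link) continue;
    if ((await getScore(new URL(link.href).hostname)) < THRESHOLD) {
      result.style.display = "none"; // below threshold: drop it from view
    }
  }
}

filterResults();
```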
-
Extra feature: Possibility to comment or add documentation for why you think a page needs to be labelled either free from or polluted by LLM slop. A moderator team could go through such comments and give them stronger influence on the score, or simply decide to lock the site in at some score if the evidence is strong enough.
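One way the report-plus-moderation data could be shaped, if anyone wants a starting point. All field names are illustrative, not a spec:

```ts
// Hypothetical record types: crowd reports with optional evidence, and a
// site score that a moderator can pin when the evidence is strong enough.

type Label = "llm-free" | "slop";

interface Report {
  domain: string;
  label: Label;
  reporterId: string;       // pseudonymous user id
  comment?: string;         // free-text reasoning for the label
  evidenceUrls?: string[];  // e.g. archived drafts, author statements
  createdAt: string;        // ISO 8601 timestamp
}

interface SiteScore {
  domain: string;
  score: number;            // aggregate of reports, -1..1
  lockedScore?: number;     // moderator override, wins over the aggregate
  lockedBy?: string;        // moderator id, for accountability
}
```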
-
@malte [cue hundreds of bots plugging into the extension and simulating the keypress on every single website]
-
@malte there's an extension called uBlacklist that lets you hide domains from search results, and people can curate and share lists with others:
https://addons.mozilla.org/en-US/firefox/addon/ublacklist/
For example:
https://github.com/laylavish/uBlockOrigin-HUGE-AI-Blocklist
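If anyone wants to try that route right away: as far as I understand, a uBlacklist subscription is just a text file of match patterns, one per line. The domains here are made-up placeholders:

```
*://*.ai-content-farm.example/*
*://generated-recipes.example/*
```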
-
@noodlejetski Good catch. Web of trust means we need some kind of P2P assessment of users too, so that users who tend to assess sites correctly gain more influence on the score, and bots that try to manipulate it get very little.
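A toy sketch of that weighting idea, to make it concrete. The starting weight for unknown users is an arbitrary assumption; the point is that a fresh bot swarm contributes almost nothing:

```ts
// Reputation-weighted voting: each vote counts in proportion to the voter's
// track record. Values and ranges here are illustrative assumptions.

interface Vote { userId: string; value: 1 | -1; } // 1 = llm-free, -1 = slop

const reputation = new Map<string, number>(); // userId -> weight in (0, 1]

function siteScore(votes: Vote[]): number {
  let weighted = 0;
  let total = 0;
  for (const v of votes) {
    const w = reputation.get(v.userId) ?? 0.05; // unknown users start near zero
    weighted += w * v.value;
    total += w;
  }
  return total === 0 ? 0 : weighted / total; // -1 (slop) .. 1 (clean)
}
```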
-
@malte I like the idea in theory—we need something to help filter out the garbage. But this could go sideways fast.
A friend of mine does surreal art and has for years. Since the AI explosion, people keep accusing him of using AI. His actual art. That he's been making since before DALL-E was even a thing. It just happens to look weird and dreamlike, so now everyone's suspicious.
That's what worries me about crowd-sourcing this. People would flag anything that feels AI-ish, even if it's just someone with an unusual style. Experimental writers, non-native speakers, artists doing anything unconventional—they'd all get caught in the crossfire. And once you're flagged, good luck shaking that off.
Plus "AI slop" vs "content I personally don't like" is going to blur together real fast. Mob mentality isn't exactly known for nuance.
-
@malte
I had similar ideas, but how do you prevent someone from signing up a bunch of bots into the community to sway the votes on sites up or down?
-
@malte what browser do you use? I can start writing a plugin
-
@malte I’m assuming you don’t want it to wait for some review process, but rather collect live feedback.
This might be a legitimate use case for a “just throw a blockchain-ish thing at it” solution.
Basically, a distributed event log of pseudonymized claims that can be synchronized on a whim. Can be additionally weighted locally based on private reactions to resulting classifications.
Abusable, but could be resistant to hostile takeover.
PS: I’m probably wrong!
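To make the event-log idea slightly more concrete, here is a sketch of a single signed claim using the browser's WebCrypto API. ECDSA/P-256 is used because Ed25519 support in WebCrypto still varies by browser; everything else (field names, encoding) is an assumption:

```ts
// One entry in a hypothetical append-only log of pseudonymized claims.
// The public key doubles as the pseudonym; peers can verify the signature
// without knowing who the key belongs to.

interface Claim {
  domain: string;
  label: "llm-free" | "slop";
  timestamp: string;  // ISO 8601
  pubkey: string;     // base64 SPKI public key = the pseudonym
  signature: string;  // base64 ECDSA signature over the unsigned fields
}

// Key pair from: crypto.subtle.generateKey(
//   { name: "ECDSA", namedCurve: "P-256" }, false, ["sign", "verify"])
async function signClaim(
  keys: CryptoKeyPair,
  domain: string,
  label: Claim["label"],
): Promise<Claim> {
  const body = { domain, label, timestamp: new Date().toISOString() };
  const bytes = new TextEncoder().encode(JSON.stringify(body));
  const sig = await crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    keys.privateKey,
    bytes,
  );
  const spki = await crypto.subtle.exportKey("spki", keys.publicKey);
  const b64 = (buf: ArrayBuffer) =>
    btoa(String.fromCharCode(...new Uint8Array(buf)));
  return { ...body, pubkey: b64(spki), signature: b64(sig) };
}
```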
-
@malte
Good idea.
Until someone does build it, https://kottke.org/25/07/the-kottkeorg-rolodex provides links to some good sites written by real people.
-
@malte Reminds me of https://shinigami-eyes.github.io/
-
@curiouscat Yes, it's the federated version of those lists, in a way.
-
@diffdude Awesome. I'm one of those who's on Firefox.
-
@giuda_ballerino I share your concern. Did you see my extra comment? I think your friend should be able to add evidence or documentation that their work is indeed not LLM-generated. That constant suspicion is exactly the problem we want to solve. https://radikal.social/@malte/115400418825934736
-
@swope In another project I've benefited a lot from, some users have higher influence on the score based on their previous ability to judge correctly. In such a scenario, those bots would lose their influence. I was hoping someone could describe this better. It has to be a WEB of trust, so we apply some kind of discrimination mechanism to users too.
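For what it's worth, the simplest version of that discrimination mechanism might be an update rule like the one below, run whenever a site's consensus settles. The learning rate and starting weight are arbitrary assumptions:

```ts
// Voters who agreed with the settled consensus drift toward weight 1;
// voters who disagreed decay toward 0. Bots pushing against consensus
// lose influence quickly. Same Vote/reputation shapes as the earlier sketch.

interface Vote { userId: string; value: 1 | -1; }
const reputation = new Map<string, number>(); // userId -> weight in (0, 1]

function updateReputation(votes: Vote[], consensus: number, rate = 0.1): void {
  for (const v of votes) {
    const agreed = Math.sign(consensus) === v.value; // matched the outcome?
    const r = reputation.get(v.userId) ?? 0.05;      // unknown users start low
    reputation.set(v.userId, r + rate * ((agreed ? 1 : 0) - r));
  }
}
```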
-
@malte I hear you, but I'm still not sold on this for the same reasons.
You're asking people to prove their work isn't AI, but how do you even do that? And then you need moderators to decide what counts as proof? That's just a mess waiting to happen.
Maybe cryptographic signatures could work, where people sign their own stuff (see the rough sketch after this post). Though I don't know enough about that to say if it's practical.
The "trust established sources" thing I really don't like—that just puts power back in the hands of a few big platforms and publications. Goes against the whole decentralized web thing, right?
I keep coming back to this: we probably just need to get better at reading critically ourselves. Actually learning to spot bullshit, AI or not. Tools could maybe help with that? Not telling you what's real or fake, but like... teaching you what questions to ask, what to look for. I don't know exactly what that looks like.
Honestly the whole thing is frustrating because there probably isn't a clean solution. We want the button that fixes everything but that's not how this works. Some things you just have to figure out the hard way.
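On the signature idea above, here is a sketch of the verifying side with WebCrypto, continuing the ECDSA/P-256 assumption from the earlier claim-signing sketch; the key would be something an author publishes once, hypothetically on their own site. Worth noting that a valid signature only proves the key holder signed that exact text, not that a human wrote it, which somewhat supports the skepticism here:

```ts
// Verify an author's signature over an article. All naming is illustrative.

async function verifyArticle(
  authorPubkeyB64: string, // base64 SPKI key published by the author
  articleText: string,
  signatureB64: string,
): Promise<boolean> {
  const raw = (b64: string) =>
    Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
  const key = await crypto.subtle.importKey(
    "spki",
    raw(authorPubkeyB64),
    { name: "ECDSA", namedCurve: "P-256" },
    false,
    ["verify"],
  );
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    key,
    raw(signatureB64),
    new TextEncoder().encode(articleText),
  );
}
```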
-
@malte i like this idea but i worry that people seeing an em-dash or college-level vocabulary will automatically assume it's slop
-
@malte turns out "clickup" already has a firefox extension called ClickUp AI Filter
-
@diffdude That seems like the opposite of what I'm proposing.