Pleased to share a page and explainer for the AI tarpit project Science is Poetry, with legal statement, rationale(s), and a few deployment notes:
https://julianoliver.com/projects/science-is-poetry/
The page may grow a bit. Just wanted to get it out the door.
-
Do you have an unused domain that you would be happy to donate to a counter-offensive against unchecked & unregulated AI crawlers that scrape human-made content to simulate & deceive for profit?
If so, please reply to this post. Your domain would become an entry point to the AI tarpit & Poison-as-a-Service project below, allowing the concerned public to choose to use it on their sites and helping make the project more resilient to blacklisting.
@JulianOliver Looking through the replies, I'll happily point a subdomain to the provided A record. My old e-mail address domain is barely used, just for the few accounts that don't let me change the address.
-
Hi @JulianOliver,
pick subdomains of mro1.de and tell me where to point the DNS.
Edit: just saw the DNS records above, I will set up some and follow up here later today.
-
@narthur For now they would all share the same IP.
Both the domain and the IP can of course be filtered at the crawler end, but as numerous sites can be hosted behind one IP, my belief is that they will drop a domain first.
Further, it's my hope to have instances of the project running on other dedicated hosts down the road.
@JulianOliver @narthur It seems to me that it would be fairly easy for them to decide that the content is junk and to treat everything coming from the same IPs accordingly. They probably keep a log of where the content used to train the LLMs comes from (maybe with some kind of hash / pseudonymous manner), and likely have ways to reject content from the same server IF they detect a problem from several domains linked to that server. IMO a bunch of reverse proxies / various IPs could help: they might be dumb and make it easy to pollute their dataset, but probably aren't.
-
@JulianOliver I have some unused ones. I'll point them at the A/AAAA records you already mentioned. Do you need to know the exact domains?
-
@themadhatter Thank you! Yes, knowing the domains would be needed. If you don't want the world to know they are yours, you can DM.
-
@JulianOliver will send you a DM when ready.
-
@JulianOliver I set up one unused .fi domain for this; I was supposed to delete it already.
-
A bit over half a million page reads a day by crawlers rn. Just to say the server is doing some good work.
Thanks all for the fine domains! I've decided to spin up a new VM and do all the site configs and TLS chain for them at once - more efficient, less prone to error. I will get onto that tomorrow and report back here.
-
@JulianOliver Also: the web design. Gotta serve the bots something nice.
-
Hi @JulianOliver,
carrot.mro1.de and
car.rot.mro1.de and
ca.rr.ot.mro1.de
The pleasure is all mine.
-
@mro Amazing, thank you! I'll have an update tomorrow once it's all set up.
-
@JulianOliver sign me up
-
@korkeala Thank you! Please share via DM so I can add it to the list.
-
I am looking to replicate the tarpit on other hosts, but for now the bots just keep chewing, and have been for days at one endpoint.
I suspect there are so many crawlers spawned, and that they have so much in the way of resources at hand to do this scraping, that it is largely automated with little oversight.
-
Thanks for all the domain donations, a beautiful thing!
Listed on the landing page, and copied into this post:
https://carrot.mro1.de
https://car.rot.mro1.de
https://ca.rr.ot.mro1.de
https://sygrovelaw.co.nz
https://wholesaletechnology.co.nz
https://goldenageproductions.co.nz
https://kginno.eu
https://outgoing.nz
https://unbreak.nz
https://poetry.rainskit.com
https://poetry.narthur.com
https://madhattercorp.com
https://sustainable-collective.org
https://sustainable-collective.de
https://c0-cloud.de
-
I have only linked them here and on the landing page, and already it's gone nuts.
These are *solely* the new domains you've donated, all in one log. These do not pertain to the project domain.
-
I've started to harvest a list of AI crawler endpoint addresses for your blacklisting pleasure.
I'll try to keep it updated. I've been fastidious about ensuring I'm only pulling those related to the known user agent, so as not to have any false positives:
https://scienceispoetry.net/files/parasites.txt
It is at the same path for all contributed domains.
For instance: https://carrot.mro1.de/files/parasites.txt
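For anyone wanting to consume the list automatically, here's a minimal Python sketch. It assumes the file is plain text with one IP address or CIDR range per line (the exact format isn't documented in this thread) and turns the entries into nginx `deny` directives; the output path in the usage note is likewise an assumption about your setup.

```python
import re

# Lines that look like a bare IPv4/IPv6 address, optionally with a /prefix.
ADDR_RE = re.compile(r"^[0-9A-Fa-f.:]+(?:/\d{1,3})?$")

def to_deny_rules(lines):
    """Filter out comments/garbage, dedupe, and emit nginx deny directives."""
    addrs = sorted({ln.strip() for ln in lines if ADDR_RE.match(ln.strip())})
    return ["deny %s;" % a for a in addrs]

# Usage sketch (URL from the thread; the conf.d path is an assumption):
#   import urllib.request
#   body = urllib.request.urlopen(
#       "https://scienceispoetry.net/files/parasites.txt").read().decode()
#   with open("/etc/nginx/conf.d/parasites.conf", "w") as f:
#       f.write("\n".join(to_deny_rules(body.splitlines())))
#   # then: nginx -t && nginx -s reload
```

Deduping and sorting keeps the generated config stable across refreshes, so a cron job only triggers a reload when the list actually changes.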
-
@JulianOliver this is the coolest thing I've seen all year, thank you for sharing and making this

-
@JulianOliver I think scraper bots and other parasites constantly scan Certificate Transparency logs to find new domains to probe. As soon as you get a new certificate, they start hitting your web server.
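That mechanism is easy to see for yourself: Certificate Transparency logs are public, and services such as crt.sh expose a JSON search over them. A small sketch, assuming crt.sh's `name_value` response field (which can pack several newline-separated names per certificate):

```python
import json

def hostnames_from_crtsh(raw_json):
    """Extract the unique hostnames from a crt.sh JSON search response."""
    names = set()
    for entry in json.loads(raw_json):
        # name_value may hold several SAN names separated by newlines.
        names.update(entry.get("name_value", "").splitlines())
    return sorted(names)

# Usage sketch ('%25' is a URL-encoded '%' wildcard):
#   import urllib.request
#   raw = urllib.request.urlopen(
#       "https://crt.sh/?q=%25.mro1.de&output=json").read().decode()
#   print(hostnames_from_crtsh(raw))
```

Anything that appears in such a query is visible to crawlers the moment the certificate is issued, which matches the traffic pattern described above.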