Pleased to share a page and explainer for the AI tarpit project Science is Poetry, with legal statement, rationale(s), and a few deployment notes:
https://julianoliver.com/projects/science-is-poetry/
The page may grow a bit. Just wanted to get it out the door.
@JulianOliver also the web design. gotta serve the bots something nice.
-
Hi @JulianOliver,
carrot.mro1.de and
car.rot.mro1.de and
ca.rr.ot.mro1.de
The pleasure is all mine.
@mro Amazing, thank you! I'll have an update tomorrow once it's all set up.
-
Do you have an unused domain that you would be happy to donate to a counter-offensive against unchecked & unregulated AI crawlers that scrape human-made content to simulate & deceive for profit?
If so, pls reply to this post. Your domain would become an entrypoint to the AI tarpit & Poison-as-a-Service project below, allowing the concerned public to choose to use it on their sites and helping make the project more resilient to blacklisting.
@JulianOliver sign me up
-
@JulianOliver I set up one unused .fi domain for this; I was supposed to delete it already anyway.
@korkeala Thank you! Please share via DM so I can add it to the list.
-
@JulianOliver @narthur It seems to me that it would be fairly easy for them to flag the content as junk and treat everything coming from the same IPs accordingly. They probably keep a log of where the content used to train the LLMs comes from (maybe in some kind of hashed / pseudonymous manner), and likely have ways to reject content from a server if they detect a problem across several domains linked to it. IMO a bunch of reverse proxies / various IPs could help: they might be dumb and make it easy to pollute their dataset, but probably aren't.
I am looking to propagate the tarpit to other hosts, but for now the bots just keep chewing, and have been for days at one endpoint.
I suspect there are so many crawlers spawned, and that they have so much in the way of resources at hand to do this scraping, that it is largely automated with little oversight.
-
Thanks all for the fine domains! I've decided to spin up a new VM and do all the site configs and TLS chains for them at once: more efficient, less prone to error. I will get onto that tomorrow and report back here.
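Doing "all the site configs at once" suggests generating them from the domain list rather than writing each by hand. A minimal sketch of that idea, assuming an nginx front end, Let's Encrypt certificate paths, and a hypothetical tarpit upstream at 127.0.0.1:8080 (none of these details are confirmed by the thread):

```python
# Sketch: render one nginx server block per donated domain, each proxying
# to the tarpit backend. The backend address and cert paths are assumptions.
TARPIT_BACKEND = "http://127.0.0.1:8080"  # hypothetical tarpit upstream

VHOST = """server {{
    listen 443 ssl;
    server_name {domain};
    ssl_certificate     /etc/letsencrypt/live/{domain}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/{domain}/privkey.pem;
    location / {{ proxy_pass {backend}; }}
}}"""

def render_vhosts(domains):
    """Return nginx config text covering every domain in the list."""
    return "\n".join(
        VHOST.format(domain=d, backend=TARPIT_BACKEND) for d in domains
    )

print(render_vhosts(["carrot.mro1.de", "unbreak.nz"]))
```

Generating the blocks from one list keeps the per-domain configs identical, which is exactly the "less prone to error" property mentioned above.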
Thanks for all the domain donations, a beautiful thing!
Listed on the landing page:
And copied into this post:
https://carrot.mro1.de
https://car.rot.mro1.de
https://ca.rr.ot.mro1.de
https://sygrovelaw.co.nz
https://wholesaletechnology.co.nz
https://goldenageproductions.co.nz
https://kginno.eu
https://outgoing.nz
https://unbreak.nz
https://poetry.rainskit.com
https://poetry.narthur.com
https://madhattercorp.com
https://sustainable-collective.org
https://sustainable-collective.de
https://c0-cloud.de
-
I have only linked them here and on the landing page, and already it's gone nuts.
These are *solely* the new domains you've donated, all in one log. These do not pertain to the project domain.
-
I've started to harvest a list of AI crawler endpoint addrs for your blacklisting pleasure.
I'll try to keep it updated. I've been fastidious in ensuring I'm only pulling those tied to the known user agents, so as not to have any false positives.
https://scienceispoetry.net/files/parasites.txt
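Assuming that file is one IP or CIDR per line (the exact format isn't specified in the thread), it could be turned into nginx `deny` rules with a short script along these lines, a sketch only:

```python
# Sketch: convert a parasites.txt-style blocklist (assumed format: one IP
# or CIDR per line, '#' comments allowed) into nginx "deny" directives.
import ipaddress

def to_nginx_deny(blocklist_text: str) -> str:
    rules = []
    for line in blocklist_text.splitlines():
        entry = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not entry:
            continue
        ipaddress.ip_network(entry, strict=False)  # raises on malformed entries
        rules.append(f"deny {entry};")
    return "\n".join(rules)

sample = """
# known AI crawler endpoints (illustrative addresses only)
203.0.113.7
198.51.100.0/24
"""
print(to_nginx_deny(sample))
# deny 203.0.113.7;
# deny 198.51.100.0/24;
```

Validating each entry with `ipaddress` before emitting it guards against a corrupted download silently producing broken config.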
It is at the same path for all contributed domains.
For instance:
-
@JulianOliver this is the coolest thing I've seen all year, thank you for sharing and making this

-
@JulianOliver I think scraper bots and other parasites constantly scan TLS transparency reports to find new domains to probe. As soon as you have a new certificate, they start hitting your web server.
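That mechanism is easy to observe: crt.sh publishes Certificate Transparency log entries as JSON at https://crt.sh/?q=&lt;domain&gt;&output=json, so any newly issued certificate's hostnames are public within minutes. A minimal sketch that parses a response of that shape offline (the entries below are illustrative, not real log data):

```python
# Sketch: extract the hostnames a CT log exposes from crt.sh-style JSON.
# crt.sh packs multiple SANs into one newline-separated "name_value" field.
import json

sample_response = json.dumps([
    {"name_value": "carrot.mro1.de", "not_before": "2026-02-01T00:00:00"},
    {"name_value": "car.rot.mro1.de\nca.rr.ot.mro1.de",
     "not_before": "2026-02-01T00:00:00"},
])

def exposed_names(crtsh_json: str) -> set:
    names = set()
    for entry in json.loads(crtsh_json):
        names.update(entry["name_value"].split("\n"))
    return names

print(sorted(exposed_names(sample_response)))
```

Anything a scraper can extract this way, it can start probing immediately, which would explain traffic arriving before the domains are linked anywhere.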
-
@pertho Very interesting! I will look into this closely. Thank you.
-
@JulianOliver heya. How do I donate and configure domains for your experiment? Do they come with any restrictions, i.e. must the FQDNs be in distinct DNS zones, etc? Can easily spin up and configure a few to begin with e.g.
poetry.zenr.io
worst.case.zenr.io
wurst.case.zenr.io
et.c.
-
@JulianOliver @futuresprog ah, cool to know, thanks. can config a few from this ...
@vortex @futuresprog Thanks a lot Adam! Please let me know when you're done; by DM is fine too.
-
@pertho @JulianOliver a way around this is to use both DNS and TLS wildcard support - there is no single domain list in the report that can be slopped.
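One caveat worth noting with the wildcard approach: per RFC 6125, a certificate wildcard matches exactly one DNS label, so `*.mro1.de` covers `carrot.mro1.de` but not the nested `car.rot.mro1.de`, which would need its own `*.rot.mro1.de` cert. A small sketch of that matching rule:

```python
# Sketch: simplified RFC 6125 wildcard matching -- "*" stands in for
# exactly one DNS label, never for several.
def wildcard_matches(pattern: str, hostname: str) -> bool:
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # "*" cannot absorb extra labels
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(wildcard_matches("*.mro1.de", "carrot.mro1.de"))   # True
print(wildcard_matches("*.mro1.de", "car.rot.mro1.de"))  # False: two labels
```

So the deeply nested donated names still need per-level wildcards (or individual certs, which would again land in the CT logs).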
-
@dch @JulianOliver I would think they would still try the bare/apex domain and "www".
-