Pleased to share a page and explainer for the AI tarpit project Science is Poetry, with legal statement, rationale(s), and a few deployment notes:
https://julianoliver.com/projects/science-is-poetry/
-
Thanks for all the domain donations, a beautiful thing!
Listed on the landing page:
And copied into this post:
https://carrot.mro1.de
https://car.rot.mro1.de
https://ca.rr.ot.mro1.de
https://sygrovelaw.co.nz
https://wholesaletechnology.co.nz
https://goldenageproductions.co.nz
https://kginno.eu
https://outgoing.nz
https://unbreak.nz
https://poetry.rainskit.com
https://poetry.narthur.com
https://madhattercorp.com
https://sustainable-collective.org
https://sustainable-collective.de
https://c0-cloud.de
-
I have only linked them here and on the landing page, and already it's gone nuts.
These are *solely* the new domains you've donated, all in one log. These do not pertain to the project domain.
I've started to harvest a list of AI crawler endpoint addresses for your blacklisting pleasure.
I'll try to keep it updated. I've been fastidious about pulling only addresses tied to the known crawler user agents, so as not to include any false positives.
https://scienceispoetry.net/files/parasites.txt
It is at the same path for all contributed domains.
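One way a site operator might consume such a list is to turn it into web-server deny rules. This is only a sketch: the file format (one IPv4 address or CIDR range per line, `#` comments allowed) is an assumption, and the sample addresses below are invented documentation ranges, not real crawler data.

```python
# Sketch: turn a parasites.txt-style blocklist into nginx "deny" rules.
# ASSUMPTION: one IPv4 address or CIDR range per line; "#" starts a comment.
# The sample entries are RFC 5737 documentation addresses, not real data.
import ipaddress

sample = """\
203.0.113.7
198.51.100.0/24
# comments and blank lines are skipped

203.0.113.7
"""

def to_nginx_deny(text: str) -> list[str]:
    rules = []
    seen = set()
    for line in text.splitlines():
        entry = line.strip()
        if not entry or entry.startswith("#"):
            continue
        net = ipaddress.ip_network(entry, strict=False)  # validates the entry
        if net in seen:  # drop duplicates
            continue
        seen.add(net)
        rules.append(f"deny {entry};")
    return rules

for rule in to_nginx_deny(sample):
    print(rule)
# deny 203.0.113.7;
# deny 198.51.100.0/24;
```

The resulting lines could be dropped into an nginx `server` or `http` block, or fed to a firewall instead.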
-
https://julianoliver.com/projects/science-is-poetry/
The page may grow a bit. Just wanted to get it out the door.
@JulianOliver this is the coolest thing I've seen all year, thank you for sharing and making this

-
@JulianOliver I think scraper bots and other parasites constantly scan TLS transparency reports to find new domains to probe. As soon as you have a new certificate, they start hitting your web server.
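The mechanism @pertho describes can be sketched as follows. Certificate Transparency logs publish every issued certificate, and services like crt.sh expose them as JSON; a scraper only has to extract the DNS names. The record shape below mimics crt.sh's JSON output (newline-separated names in `name_value`), and the hostnames are invented for illustration.

```python
# Sketch of how a scraper might mine Certificate Transparency data for
# fresh hostnames to probe. The records mimic the JSON shape returned by
# crt.sh (newline-separated DNS names in "name_value"); the values
# themselves are invented examples, not real log entries.
import json

ct_entries = json.loads("""[
  {"common_name": "poetry.example.net",
   "name_value": "poetry.example.net\\nwww.poetry.example.net"},
  {"common_name": "*.example.net",
   "name_value": "*.example.net\\nexample.net"}
]""")

def hostnames(entries) -> set[str]:
    """Collect every concrete (non-wildcard) DNS name the log exposes."""
    names = set()
    for e in entries:
        for name in e["name_value"].split("\n"):
            if not name.startswith("*."):  # a wildcard hides the real labels
                names.add(name)
    return names

print(sorted(hostnames(ct_entries)))
# ['example.net', 'poetry.example.net', 'www.poetry.example.net']
```

Note the wildcard entry: it reveals only the apex, which is why wildcard certificates come up later in the thread as a partial workaround.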
-
@pertho Very interesting! I will look into this closely. Thank you.
-
Do you have an unused domain that you would be happy to donate to a counter-offensive against unchecked & unregulated AI crawlers that scrape human-made content to simulate & deceive for profit?
If so, please reply to this post. Your domain would become an entrypoint to the AI tarpit & Poison-as-a-Service project below, allowing the concerned public to choose to use it on their sites and helping make the project more resilient to blacklisting.
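For readers unfamiliar with the tarpit idea the post refers to: the general technique is to hold a crawler's connection open while drip-feeding it an endless stream of procedurally generated junk text. This toy sketch is NOT the Science is Poetry implementation (which is not shown in this thread); the word list and function names are invented for illustration.

```python
# Toy illustration of the general AI-tarpit idea -- not the project's
# actual implementation. Generate an infinite stream of junk "verse";
# a crawler that follows a donated domain into this gets an unbounded
# page that wastes its time and poisons its corpus.
import itertools
import random

WORDS = ["science", "is", "poetry", "entropy", "mirror", "signal", "noise"]

def verse_stream(seed: int):
    """Yield an infinite, deterministic (per seed) sequence of junk lines."""
    rng = random.Random(seed)
    for n in itertools.count(1):
        line = " ".join(rng.choice(WORDS) for _ in range(6))
        yield f"{n:06d} {line}\n"

# A real deployment would write each line to the client socket with a
# deliberate delay (e.g. a couple of seconds between chunks) so each
# crawler connection is held open for as long as possible.
for verse in itertools.islice(verse_stream(seed=1), 3):
    print(verse, end="")
```

Because the stream never terminates on its own, the server (not the crawler) decides how long each connection lives.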
@JulianOliver heya. How do I donate and configure domains for your experiment? Do they come with any restrictions, e.g. must the FQDNs be in distinct DNS zones? Can easily spin up and configure a few to begin with, e.g.:
poetry.zenr.io
worst.case.zenr.io
wurst.case.zenr.io
etc.
-
@JulianOliver @futuresprog ah, cool to know, thanks. can config a few from this ...
-
@vortex @futuresprog Thanks a lot Adam! Please let me know when you're done; by DM is fine too.
-
@pertho @JulianOliver a way around this is to use both DNS and TLS wildcard support - there is no single domain list in the report that can be slopped.
-
@dch I would think they would still try the bare/apex domain and "www".