Scrapers selectively respect robots.txt directives: evidence from a large-scale empirical study
Online data scraping has taken on new dimensions in recent years, as traditional scrapers have been joined by new AI-specific bots. To counteract unwanted scraping, many sites use tools like the Robots Exclusion Protocol (REP), which places a robots.txt file at the site root to dictate scraper behavior. Yet the efficacy of the REP is not well understood. Anecdotal evidence suggests some bots comply poorly with it, but no rigorous study exists to support (or refute) this claim. To understand the merits and limits of the REP, we conduct the first large-scale study of web scraper compliance with robots.txt directives, using anonymized web logs from our institution. We analyze the behavior of 130 self-declared bots (and many anonymous ones) over 40 days, using a series of controlled robots.txt experiments. We find that bots are less likely to comply with stricter robots.txt directives, and that certain categories of bots, including AI search crawlers, rarely check robots.txt at all. These findings suggest that relying on robots.txt files to prevent unwanted scraping is risky and highlight the need for alternative approaches.
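
For context, the compliance the study measures works as follows: before fetching a page, a well-behaved scraper retrieves robots.txt from the site root and tests the target URL against its directives. The sketch below illustrates that check using Python's standard-library urllib.robotparser; the directives and the "ExampleBot" user agent are hypothetical illustrations, not material from the paper.

    # A minimal sketch of the robots.txt check a compliant scraper performs,
    # using Python's standard library. The directives and the "ExampleBot"
    # user agent are hypothetical, not taken from the study.
    from urllib import robotparser

    # Hypothetical robots.txt content as it might appear at a site root.
    robots_txt = """
    User-agent: *
    Disallow: /private/
    """

    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())

    # A compliant bot runs this check before every request; the study finds
    # that many bots, notably AI search crawlers, skip it entirely.
    print(rp.can_fetch("ExampleBot", "https://example.com/private/page.html"))  # False
    print(rp.can_fetch("ExampleBot", "https://example.com/public/page.html"))   # True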