Scrapers selectively respect robots.txt directives: evidence from a large-scale empirical study
Online data scraping has taken on new dimensions in recent years, as traditional scrapers have been joined by new AI-specific bots. To counteract unwanted scraping, many sites use tools such as the Robots Exclusion Protocol (REP), under which a robots.txt file placed at the site root dictates scraper behavior. Yet the efficacy of the REP is not well understood. Anecdotal evidence suggests some bots comply poorly with it, but no rigorous study exists to support (or refute) this claim. To understand the merits and limits of the REP, we conduct the first large-scale study of web scraper compliance with robots.txt directives, using anonymized web logs from our institution. We analyze the behavior of 130 self-declared bots (and many anonymous ones) over 40 days, using a series of controlled robots.txt experiments. We find that bots are less likely to comply with stricter robots.txt directives, and that certain categories of bots, including AI search crawlers, rarely check robots.txt at all. These findings suggest that relying on robots.txt files to prevent unwanted scraping is risky and highlight the need for alternative approaches.
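For readers unfamiliar with the REP, the sketch below (illustrative only, not part of the study) shows how a compliant crawler consults robots.txt before fetching a page, using Python's standard-library urllib.robotparser; the site URL and user-agent name are placeholders.

    # Minimal sketch of REP-compliant fetching logic; names are hypothetical.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # robots.txt sits at the site root
    rp.read()                                     # fetch and parse the directives

    url = "https://example.com/private/page.html"
    if rp.can_fetch("ExampleBot", url):
        print("robots.txt allows ExampleBot to fetch", url)
    else:
        print("robots.txt disallows this fetch; a compliant bot would skip it")

A bot that skips this check entirely, or honors only some directives, is exactly the selective compliance the study measures.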