LLM watermarking has attracted attention as a promising way to detect
AI-generated content, with some works suggesting that current schemes may
already be fit for deployment. In this work we dispute this claim, identifying
watermark stealing (WS) as a fundamental vulnerability of these schemes. We
show that querying the API of a watermarked LLM to approximately
reverse-engineer a watermark enables practical spoofing attacks, as
hypothesized in prior work, but also greatly boosts scrubbing attacks, which
was previously unnoticed. We are the first to propose an automated WS algorithm
and use it in the first comprehensive study of spoofing and scrubbing in
realistic settings. We show that for under $50 an attacker can both spoof and
scrub state-of-the-art schemes previously considered safe, with average
success rate of over 80%, stressing the need for more robust schemes. We make
all our code and additional examples available at
https://watermark-stealing.org.