Watermark Smoothing Attacks against Language Models (WaLM)
Main: 12 pages · Appendix: 20 pages · Bibliography: 4 pages · 9 figures · 11 tables
Abstract
Watermarking is a key technique for detecting AI-generated text. In this work, we study its vulnerabilities and introduce the Smoothing Attack, a novel watermark-removal method. By exploiting the relationship between the model's confidence and watermark detectability, our attack selectively smooths the watermarked content, erasing watermark traces while preserving text quality. We validate our attack on open-source models ranging from B to B parameters across different watermarking schemes, demonstrating its effectiveness. Our findings expose critical weaknesses in existing watermarking schemes and highlight the need for stronger defenses.
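The abstract describes smoothing the output distribution selectively, guided by model confidence, since watermark bias is easiest to embed (and to dilute) where the model is unconfident. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm: the entropy threshold `tau`, blending weight `lam`, and the use of a reference model's distribution are all assumptions made here for illustration.

```python
import math

def entropy(probs):
    # Shannon entropy (in nats) of a token probability distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def smooth_distribution(p_watermarked, p_reference, tau=0.5, lam=0.8):
    # Hypothetical smoothing step: when the watermarked model is
    # unconfident (entropy above tau), blend its next-token distribution
    # with a reference model's distribution to dilute the watermark bias;
    # confident positions are left untouched to preserve text quality.
    if entropy(p_watermarked) > tau:
        return [(1 - lam) * pw + lam * pr
                for pw, pr in zip(p_watermarked, p_reference)]
    return list(p_watermarked)

# Confident position: distribution passes through unchanged.
confident = smooth_distribution([0.97, 0.01, 0.01, 0.01], [0.4, 0.3, 0.2, 0.1])

# Unconfident position: distribution is pulled toward the reference model.
blended = smooth_distribution([0.25, 0.25, 0.25, 0.25], [0.4, 0.3, 0.2, 0.1])
```

In this sketch the blended distribution remains a valid probability distribution (a convex combination of two distributions), which is what lets the attack trade watermark strength against fidelity via `lam`.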
