
Watermark Smoothing Attacks against Language Models

Abstract

Watermarking is a key technique for detecting AI-generated text. In this work, we study its vulnerabilities and introduce the Smoothing Attack, a novel watermark removal method. By leveraging the relationship between the model's confidence and watermark detectability, our attack selectively smooths the watermarked content, erasing watermark traces while preserving text quality. We validate our attack on open-source models ranging from 1.3B to 30B parameters on 10 different watermarks, demonstrating its effectiveness. Our findings expose critical weaknesses in existing watermarking schemes and highlight the need for stronger defenses.
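The abstract describes the attack only at a high level: smooth the watermarked model's output where its confidence is low, since that is where watermark signal concentrates, and leave high-confidence tokens alone to preserve quality. The sketch below illustrates that idea on a single next-token distribution; the function name, the confidence measure (max probability), the threshold, and the mixing weight `alpha` are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def smooth_distribution(p_watermarked, p_reference,
                        conf_threshold=0.5, alpha=0.5):
    """Illustrative sketch of confidence-gated smoothing (assumed form,
    not the paper's exact method).

    When the watermarked model is confident (max probability above
    conf_threshold), its distribution carries little watermark signal,
    so we keep it unchanged and preserve text quality. When it is not
    confident, we blend in a reference model's distribution to wash out
    the watermark bias.
    """
    p_watermarked = np.asarray(p_watermarked, dtype=float)
    p_reference = np.asarray(p_reference, dtype=float)

    if p_watermarked.max() >= conf_threshold:
        # Confident token: leave untouched.
        return p_watermarked

    # Low-confidence token: mix with the reference distribution.
    mixed = alpha * p_watermarked + (1.0 - alpha) * p_reference
    return mixed / mixed.sum()  # renormalize defensively
```

For example, a peaked distribution like `[0.9, 0.05, 0.05]` passes through unchanged, while a flat one like `[0.4, 0.35, 0.25]` is pulled toward the reference model, diluting any watermark-induced token preferences.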

@article{chang2025_2407.14206,
  title={Watermark Smoothing Attacks against Language Models},
  author={Hongyan Chang and Hamed Hassani and Reza Shokri},
  journal={arXiv preprint arXiv:2407.14206},
  year={2025}
}