AI-Generated Content (AIGC) is rapidly expanding, with services using advanced generative models to create realistic images and fluent text. Regulating such content is crucial to prevent policy violations, such as unauthorized commercialization or unsafe content distribution. Watermarking is a promising solution for content attribution and verification, but we demonstrate its vulnerability to two key attacks: (1) watermark removal, where adversaries erase embedded marks to evade regulation, and (2) watermark forging, where they generate illicit content with forged watermarks, leading to misattribution. We propose Warfare, a unified attack framework leveraging a pre-trained diffusion model for content processing and a generative adversarial network for watermark manipulation. Evaluations across datasets and embedding setups show that Warfare achieves high success rates while preserving content quality. We further introduce Warfare-Plus, which enhances efficiency without compromising effectiveness. The code can be found in this https URL.
@article{li2025_2310.07726,
  title={Warfare: Breaking the Watermark Protection of AI-Generated Content},
  author={Guanlin Li and Yifei Chen and Jie Zhang and Shangwei Guo and Han Qiu and Guoyin Wang and Jiwei Li and Tianwei Zhang},
  journal={arXiv preprint arXiv:2310.07726},
  year={2025}
}