ResearchTrend.AI
Defending LLM Watermarking Against Spoofing Attacks with Contrastive Representation Learning

9 April 2025
Li An
Yujian Liu
Yepeng Liu
Yang Zhang
Yuheng Bu
Shiyu Chang
Abstract

Watermarking has emerged as a promising technique for detecting texts generated by LLMs. Current research has primarily focused on three design criteria: high quality of the watermarked text, high detectability, and robustness against removal attacks. However, security against spoofing attacks remains relatively understudied. For example, a piggyback attack can maliciously alter the meaning of watermarked text, transforming it into hate speech while preserving the original watermark, thereby damaging the reputation of the LLM provider. We identify two core challenges that make defending against spoofing difficult: (1) the need for watermarks to be both sensitive to semantic-distorting changes and insensitive to semantic-preserving edits, and (2) the contradiction between the need to detect global semantic shifts and the local, auto-regressive nature of most watermarking schemes. To address these challenges, we propose a semantic-aware watermarking algorithm that post-hoc embeds watermarks into a given target text while preserving its original meaning. Our method introduces a semantic mapping model, which guides the generation of a green-red token list and is contrastively trained to be sensitive to semantic-distorting changes and insensitive to semantic-preserving changes. Experiments on two standard benchmarks demonstrate strong robustness against removal attacks and security against spoofing attacks, including sentiment reversal and toxic content insertion, while maintaining high watermark detectability. Our approach offers a significant step toward more secure and semantically aware watermarking for LLMs. Our code is available at this https URL.
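The green-red watermarking family the abstract builds on can be sketched minimally: a seed pseudo-randomly partitions the vocabulary into a "green" list (preferred at generation time) and a "red" list, and detection counts how many tokens fall in the green list via a one-proportion z-score. In the paper's semantic-aware variant the seed would come from the semantic mapping model rather than from preceding tokens; the seeding choice, function names, and parameters below are illustrative assumptions, not the authors' implementation.

```python
import random
from math import sqrt

def green_red_split(semantic_seed: int, vocab_size: int, gamma: float = 0.5):
    """Partition token ids into green/red lists from a seed.

    In a standard auto-regressive scheme the seed hashes the preceding
    tokens; a semantic-aware variant (assumed here) would derive it from
    a sentence-level semantic code, so semantic-preserving edits map to
    the same code and reproduce the same green list.
    """
    rng = random.Random(semantic_seed)  # deterministic per seed
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    cut = int(gamma * vocab_size)  # gamma = fraction of vocab marked green
    return set(ids[:cut]), set(ids[cut:])

def detection_z_score(green_hits: int, total_tokens: int, gamma: float = 0.5) -> float:
    """One-proportion z-test: how far the green count exceeds chance."""
    expected = gamma * total_tokens
    return (green_hits - expected) / sqrt(total_tokens * gamma * (1.0 - gamma))
```

With gamma = 0.5, a 100-token text containing 70 green tokens scores z = 4.0, well above a typical detection threshold. The key property for spoofing defense is that the same seed always yields the same split, so a semantics-derived seed keeps the watermark detectable under semantic-preserving edits while semantic-distorting edits change the seed and break detection.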

@article{an2025_2504.06575,
  title={Defending LLM Watermarking Against Spoofing Attacks with Contrastive Representation Learning},
  author={Li An and Yujian Liu and Yepeng Liu and Yang Zhang and Yuheng Bu and Shiyu Chang},
  journal={arXiv preprint arXiv:2504.06575},
  year={2025}
}