
NSmark: Null Space Based Black-box Watermarking Defense Framework for Language Models

16 October 2024
Haodong Zhao
Jinming Hu
Peixuan Li
Fangqi Li
Jinrui Sha
Tianjie Ju
Peixuan Chen
Zhuosheng Zhang
Gongshen Liu
Abstract

Language models (LMs) have emerged as critical intellectual property (IP) assets that necessitate protection. Although various watermarking strategies have been proposed, they remain vulnerable to the Linear Functionality Equivalence Attack (LFEA), which can invalidate most existing white-box watermarks without prior knowledge of the watermarking scheme or training data. This paper analyzes and extends the attack scenarios of LFEA to the commonly employed black-box settings for LMs by considering last-layer outputs (dubbed LL-LFEA). We discover that the null space of the output matrix remains invariant against LL-LFEA attacks. Based on this finding, we propose NSmark, a black-box watermarking scheme that is task-agnostic and capable of resisting LL-LFEA attacks. NSmark consists of three phases: (i) watermark generation using the digital signature of the owner, enhanced by spread spectrum modulation for increased robustness; (ii) watermark embedding through an output mapping extractor that preserves the LM performance while maximizing watermark capacity; (iii) watermark verification, assessed by extraction rate and null space conformity. Extensive experiments on both pre-training and downstream tasks confirm the effectiveness, scalability, reliability, fidelity, and robustness of our approach. Code is available at this https URL.
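The core invariant the abstract relies on can be illustrated with linear algebra alone: if an attacker post-multiplies the last-layer output matrix by an invertible linear map (the LL-LFEA setting), every left-null vector of the original output matrix still annihilates the attacked outputs, since x^T O = 0 implies x^T (O A) = 0. The sketch below is a hypothetical illustration of this property, not the paper's actual verification procedure; the matrix sizes and tolerance are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical output matrix: n samples x d output dims (n > d, so a
# nontrivial left null space exists). Sizes are illustrative assumptions.
n, d = 8, 5
O = rng.standard_normal((n, d))

# Left null space of O: vectors x with x^T O = 0, i.e. the null space of
# O^T, spanned by the left singular vectors with zero singular value.
U, s, Vt = np.linalg.svd(O)
rank = int(np.sum(s > 1e-10))
N = U[:, rank:]  # columns span the left null space, shape (n, n - rank)

# An LL-LFEA-style attack applies an invertible linear map A to the
# last-layer outputs: O' = O @ A.
A = rng.standard_normal((d, d))
O_attacked = O @ A

# Invariance check: every left-null vector of O still annihilates O',
# because x^T O = 0 implies x^T O A = 0 for any A.
residual = N.T @ O_attacked
print("left null space preserved:", np.allclose(residual, 0, atol=1e-8))
```

This is why a verification statistic built on null space conformity can survive attacks that rescale or remix the output layer while preserving functionality: the attack changes the column space of the outputs but not their left null space.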

@article{zhao2025_2410.13907,
  title={NSmark: Null Space Based Black-box Watermarking Defense Framework for Language Models},
  author={Haodong Zhao and Jinming Hu and Peixuan Li and Fangqi Li and Jinrui Sha and Tianjie Ju and Peixuan Chen and Zhuosheng Zhang and Gongshen Liu},
  journal={arXiv preprint arXiv:2410.13907},
  year={2025}
}