P2Mark: Plug-and-play Parameter-level Watermarking for Neural Speech Generation

Abstract

Neural speech generation (NSG) has rapidly advanced as a key component of artificial intelligence-generated content, enabling the generation of high-quality, highly realistic speech for diverse applications. This progress also increases the risk of misuse and poses threats to societal security. Audio watermarking, which embeds imperceptible marks into generated audio, offers a promising approach for secure NSG usage. However, current audio watermarking methods mainly operate at the audio level or feature level, which makes them unsuitable for open-source scenarios where source code and model weights are released. To address this limitation, we propose a Plug-and-play Parameter-level WaterMarking (P2Mark) method for NSG. Specifically, we embed watermarks into the released model weights, offering a reliable solution for proactively tracing and protecting model copyrights in open-source scenarios. During training, we introduce a lightweight watermark adapter into the pre-trained model, allowing watermark information to be merged into the model via this adapter. This design preserves the flexibility to modify the watermark before model release while securing the watermark within the model parameters after release. Meanwhile, we propose a gradient orthogonal projection optimization strategy to preserve both the quality of the generated audio and the accuracy of the embedded watermark. Experimental results on two mainstream waveform decoders in NSG (i.e., a vocoder and a codec) demonstrate that, in terms of watermark extraction accuracy, watermark imperceptibility, and robustness, P2Mark achieves performance comparable to state-of-the-art audio watermarking methods that cannot be applied in open-source white-box protection scenarios.
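
The abstract describes two technical ingredients: a lightweight watermark adapter that can be merged into the released weights, and a gradient orthogonal projection strategy that balances audio quality against watermark accuracy. The PyTorch sketch below is a minimal illustration under our own assumptions, not the authors' implementation: it assumes a LoRA-style low-rank adapter whose update is folded into the base weights before release, and a PCGrad-style projection that removes the component of the watermark-loss gradient that conflicts with the generation-loss gradient. The names WatermarkAdapter, merge_into_base, and orthogonal_projection_step are hypothetical, and conditioning the adapter on the watermark message is omitted for brevity.

# Hypothetical sketch (not the authors' code): a LoRA-style watermark adapter
# that can be merged into the released weights, plus a PCGrad-style orthogonal
# projection of the watermark-loss gradient against the generation-loss gradient.

import torch
import torch.nn as nn


class WatermarkAdapter(nn.Module):
    """Low-rank adapter attached to a frozen linear layer of a waveform decoder."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)           # pre-trained weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)        # adapter starts as an identity update

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))

    @torch.no_grad()
    def merge_into_base(self) -> nn.Linear:
        """Fold the low-rank update into the base weights before model release."""
        merged = nn.Linear(self.base.in_features, self.base.out_features,
                           bias=self.base.bias is not None)
        merged.weight.copy_(self.base.weight + self.up.weight @ self.down.weight)
        if self.base.bias is not None:
            merged.bias.copy_(self.base.bias)
        return merged


def orthogonal_projection_step(model, loss_gen, loss_wm, optimizer):
    """One update in which the watermark gradient is projected to be orthogonal
    to the generation gradient on each trainable parameter (PCGrad-style)."""
    params = [p for p in model.parameters() if p.requires_grad]

    g_gen = torch.autograd.grad(loss_gen, params, retain_graph=True, allow_unused=True)
    g_wm = torch.autograd.grad(loss_wm, params, allow_unused=True)

    optimizer.zero_grad()
    for p, gg, gw in zip(params, g_gen, g_wm):
        gg = torch.zeros_like(p) if gg is None else gg
        gw = torch.zeros_like(p) if gw is None else gw
        dot = torch.sum(gg * gw)
        if dot < 0:                           # drop the conflicting component
            gw = gw - dot / (gg.norm() ** 2 + 1e-12) * gg
        p.grad = gg + gw
    optimizer.step()

Under these assumptions, merging the low-rank update into the base layer means the shipped checkpoint contains no separable adapter module, which mirrors the abstract's stated goal: the watermark can still be edited before release (by retraining or swapping the adapter) but lives inside the model parameters afterward.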

@article{ren2025_2504.05197,
  title={P2Mark: Plug-and-play Parameter-level Watermarking for Neural Speech Generation},
  author={Yong Ren and Jiangyan Yi and Tao Wang and Jianhua Tao and Zheng Lian and Zhengqi Wen and Chenxing Li and Ruibo Fu and Ye Bai and Xiaohui Zhang},
  journal={arXiv preprint arXiv:2504.05197},
  year={2025}
}