CODE ACROSTIC: Robust Watermarking for Code Generation - WaLM

Watermarking large language models (LLMs) is vital for preventing their misuse, including the fabrication of fake news, plagiarism, and spam. It is especially important to watermark LLM-generated code, as it often contains intellectual property. In this paper, we found that existing methods for watermarking LLM-generated code fail to address comment removal. In such cases, an attacker can simply remove the comments from the generated code without affecting its functionality, significantly reducing the effectiveness of current code-watermarking methods. On the other hand, injecting a watermark into code is challenging because, as previous works have noted, most code represents a low-entropy scenario compared to natural language. Our approach addresses this issue by leveraging prior knowledge to distinguish between low-entropy and high-entropy parts of the code, as indicated by a Cue List. We then inject the watermark guided by this Cue List, achieving higher detectability and usability than existing methods. We evaluated our proposed method on HumanEval and compared it with three state-of-the-art code watermarking techniques. The results demonstrate the effectiveness of our approach.
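The Cue-List-guided injection can be pictured with a minimal sketch, assuming a green-list/logit-bias watermark in the style of Kirchenbauer et al. where the bias is applied only at positions the Cue List marks as high-entropy. The `CUE_LIST` contents, the `green_list` and `bias_logits` helpers, and the `GAMMA`/`DELTA` values below are illustrative assumptions, not the paper's actual implementation.

```python
import hashlib
import random

# Hypothetical Cue List: tokens after which generated code tends to be
# high-entropy (names, comments, literals) rather than forced syntax.
CUE_LIST = {"def", "class", "=", "return", "#", '"'}

GAMMA = 0.5   # fraction of the vocabulary placed in the "green" list
DELTA = 2.0   # logit bias added to green tokens at watermarked positions

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Pseudo-randomly partition the vocabulary, seeded on the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(GAMMA * len(shuffled))])

def bias_logits(logits: dict[str, float], prev_token: str) -> dict[str, float]:
    """Bias only cue positions (high-entropy parts of the code); leave
    low-entropy syntax untouched so functionality is preserved."""
    if prev_token not in CUE_LIST:
        return logits  # low-entropy position: inject no watermark here
    green = green_list(prev_token, list(logits))
    return {t: (v + DELTA if t in green else v) for t, v in logits.items()}
```

Under these assumptions, detection would reseed the same green lists and run a one-sided test on the fraction of green tokens observed at cue positions; since the signal is carried by code tokens rather than comments alone, stripping comments would not by itself erase it.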