Watermarking Language Models with Error Correcting Codes

12 June 2024
Patrick Chao
Yan Sun
Edgar Dobriban
Hamed Hassani
Abstract

Recent progress in large language models enables the creation of realistic machine-generated content. Watermarking is a promising approach to distinguish machine-generated text from human text, embedding statistical signals in the output that are ideally undetectable to humans. We propose a watermarking framework that encodes such signals through an error correcting code. Our method, termed robust binary code (RBC) watermark, introduces no distortion compared to the original probability distribution, and no noticeable degradation in quality. We evaluate our watermark on base and instruction fine-tuned models and find our watermark is robust to edits, deletions, and translations. We provide an information-theoretic perspective on watermarking, a powerful statistical test for detection and for generating p-values, and theoretical guarantees. Our empirical findings suggest our watermark is fast, powerful, and robust, comparing favorably to the state-of-the-art.
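The abstract mentions a statistical test that yields p-values for detection. As a minimal sketch of this idea (not the paper's exact RBC construction, and with an illustrative bit stream rather than real decoder output): under the null hypothesis that text is unwatermarked, each decoded bit matches the keyed expected bit with probability 1/2, so the match count follows a Binomial(n, 1/2) distribution and a one-sided tail probability serves as the p-value.

```python
import math

def binomial_p_value(matches: int, n: int) -> float:
    """One-sided p-value: P(X >= matches) for X ~ Binomial(n, 1/2),
    the null distribution for bits decoded from unwatermarked text."""
    return sum(math.comb(n, k) for k in range(matches, n + 1)) * 0.5 ** n

# Hypothetical decoded bits vs. the bits expected under the watermark key.
decoded  = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
expected = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1]
matches = sum(d == e for d, e in zip(decoded, expected))
p = binomial_p_value(matches, len(decoded))
# A small p-value indicates far more agreement than chance, i.e. a watermark.
```

In the actual method, the error correcting code lets the decoder recover the embedded bits even after edits, deletions, or translation corrupt some of them, which is why the test remains powerful under those attacks.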

@article{chao2025_2406.10281,
  title={Watermarking Language Models with Error Correcting Codes},
  author={Patrick Chao and Yan Sun and Edgar Dobriban and Hamed Hassani},
  journal={arXiv preprint arXiv:2406.10281},
  year={2025}
}