LLM Watermarking Using Mixtures and Statistical-to-Computational Gaps

Abstract

Given a text, can we determine whether it was generated by a large language model (LLM) or written by a human? A widely studied approach to this problem is watermarking. We propose an elementary, undetectable watermarking scheme for the closed setting. For the harder open setting, where the adversary has access to most of the model, we propose an unremovable watermarking scheme.
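To illustrate how a watermark detector can decide between LLM and human text, here is a minimal sketch of a generic "green-list" watermark test (in the style of pseudorandom-partition schemes from the broader literature, not the mixture-based construction of this paper). The helpers `greenlist` and `watermark_zscore`, the hash-based partition, and the fraction parameter are all illustrative assumptions: at generation time the model would be biased toward green tokens, and the detector counts green hits and computes a z-score against the human-text null hypothesis.

```python
import hashlib
import math

def greenlist(prev_token: str, vocab: list[str], frac: float = 0.5) -> set[str]:
    # Hypothetical helper: pseudorandomly partition the vocabulary, keyed by
    # the previous token. Roughly a `frac` fraction of tokens come out "green".
    out = set()
    for tok in vocab:
        h = hashlib.sha256((prev_token + "|" + tok).encode()).digest()
        if h[0] / 255.0 < frac:
            out.add(tok)
    return out

def watermark_zscore(tokens: list[str], vocab: list[str], frac: float = 0.5) -> float:
    # Count tokens that land in the green list keyed by their predecessor.
    # Under the null hypothesis (human text), about `frac` of tokens are green,
    # so a large positive z-score is evidence of watermarked LLM output.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in greenlist(prev, vocab, frac))
    n = len(tokens) - 1
    return (hits - frac * n) / math.sqrt(frac * (1 - frac) * n)
```

A generator that always emits green tokens yields a z-score growing like the square root of the text length, while human text keeps it near zero; "undetectable" and "unremovable" schemes, as studied in the paper, impose much stronger guarantees than this toy test provides.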

@article{abdalla2025_2505.01484,
  title={LLM Watermarking Using Mixtures and Statistical-to-Computational Gaps},
  author={Pedro Abdalla and Roman Vershynin},
  journal={arXiv preprint arXiv:2505.01484},
  year={2025}
}