WaterDrum: Watermarking for Data-centric Unlearning Metric

8 May 2025
Xinyang Lu
Xinyuan Niu
Gregory Kang Ruey Lau
Bui Thi Cam Nhung
Rachael Hwee Ling Sim
Fanyu Wen
Chuan-Sheng Foo
See-Kiong Ng
Bryan Kian Hsiang Low
Abstract

Large language model (LLM) unlearning is critical in real-world applications where it is necessary to efficiently remove the influence of private, copyrighted, or harmful data of some users. However, existing utility-centric unlearning metrics (based on model utility) may fail to accurately evaluate the extent of unlearning in realistic settings, such as when (a) the forget and retain sets have semantically similar content, (b) retraining the model from scratch on the retain set is impractical, and/or (c) the model owner can improve the unlearning metric without directly performing unlearning on the LLM. This paper presents WaterDrum, the first data-centric unlearning metric for LLMs, which exploits robust text watermarking to overcome these limitations. We also introduce new benchmark datasets for LLM unlearning that contain varying levels of similar data points and can be used to rigorously evaluate unlearning algorithms using WaterDrum. Our code is available at this https URL and our new benchmark datasets are released at this https URL.
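As a rough illustration of the data-centric idea (a minimal sketch, not the authors' implementation), the Python snippet below assumes each data owner's contribution was watermarked with a distinct key before fine-tuning, and scores unlearning by how much that owner's watermark signal drops in the model's generations. The generate and detect_watermark callables are hypothetical placeholders for a sampling routine and a watermark detector.

# Hedged sketch of a watermark-based unlearning score (illustrative only).
# Assumption: each owner's training text carries a watermark tied to owner_key,
# so residual watermark strength in generations reflects residual influence.

def watermark_unlearning_score(model, forget_prompts, owner_key,
                               generate, detect_watermark):
    """Average watermark detection strength for one owner's key.

    A well-unlearned model should score close to a model that was
    never trained on that owner's (watermarked) data.
    """
    scores = []
    for prompt in forget_prompts:
        completion = generate(model, prompt)  # sample from the (un)learned model
        # e.g. a z-score over watermarked tokens; higher = stronger residual signal
        scores.append(detect_watermark(completion, owner_key))
    return sum(scores) / len(scores)

# Usage: compare the score before vs. after unlearning; the size of the drop
# (relative to a never-trained baseline, when available) quantifies forgetting
# without relying on utility-centric proxies.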

@article{lu2025_2505.05064,
  title={WaterDrum: Watermarking for Data-centric Unlearning Metric},
  author={Xinyang Lu and Xinyuan Niu and Gregory Kang Ruey Lau and Bui Thi Cam Nhung and Rachael Hwee Ling Sim and Fanyu Wen and Chuan-Sheng Foo and See-Kiong Ng and Bryan Kian Hsiang Low},
  journal={arXiv preprint arXiv:2505.05064},
  year={2025}
}