LUME: LLM Unlearning with Multitask Evaluations

20 February 2025
Anil Ramakrishna
Yixin Wan
Xiaomeng Jin
Kai-Wei Chang
Zhiqi Bu
Bhanukiran Vinzamuri
Volkan Cevher
Mingyi Hong
Rahul Gupta
Abstract

Unlearning aims to remove copyrighted, sensitive, or private content from large language models (LLMs) without full retraining. In this work, we develop a multi-task unlearning benchmark (LUME) featuring three tasks: (1) unlearn synthetically generated creative short novels, (2) unlearn synthetic biographies containing sensitive information, and (3) unlearn a collection of public biographies. We further release two fine-tuned LLMs, of 1B and 7B parameters, as the target models. We conduct detailed evaluations of several recently proposed unlearning algorithms and present results on carefully crafted metrics to understand their behavior and limitations.
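To make the unlearning objective concrete, the following is a minimal toy sketch of one common baseline from the unlearning literature, gradient difference: ascend the loss on a forget example while descending it on a retain example. This is an illustrative assumption, not the paper's method or benchmark code; the one-parameter logistic model and all names here are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy of a toy one-parameter logistic model p = sigmoid(w*x).
    p = sigmoid(w * x)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def grad(w, x, y):
    # d/dw of the cross-entropy above: (p - y) * x.
    return (sigmoid(w * x) - y) * x

def unlearn_step(w, forget, retain, lr=0.1):
    # Gradient difference: ascend on the forget example, descend on the retain one.
    g = -grad(w, *forget) + grad(w, *retain)
    return w - lr * g

w0 = 2.0                  # model that has "memorized" the forget example
forget = (1.0, 1.0)       # (x, y) pair whose behavior should be removed
retain = (-1.0, 1.0)      # (x, y) pair whose behavior should be kept

w = w0
for _ in range(50):
    w = unlearn_step(w, forget, retain)

# After unlearning, the loss on the forget example has risen,
# i.e. the model no longer fits the content to be removed.
print(loss(w, *forget) > loss(w0, *forget))  # → True
```

A benchmark such as LUME then measures, with dedicated metrics, how much forget-set knowledge remains and how much retain-set utility is preserved after such updates.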

@article{ramakrishna2025_2502.15097,
  title={LUME: LLM Unlearning with Multitask Evaluations},
  author={Anil Ramakrishna and Yixin Wan and Xiaomeng Jin and Kai-Wei Chang and Zhiqi Bu and Bhanukiran Vinzamuri and Volkan Cevher and Mingyi Hong and Rahul Gupta},
  journal={arXiv preprint arXiv:2502.15097},
  year={2025}
}