
ReLearn: Unlearning via Learning for Large Language Models

Abstract

Current unlearning methods for large language models usually rely on reverse optimization to reduce target token probabilities. However, this paradigm disrupts subsequent token prediction, degrading model performance and linguistic coherence. Moreover, existing evaluation metrics overemphasize contextual forgetting while inadequately assessing response fluency and relevance. To address these challenges, we propose ReLearn, a data augmentation and fine-tuning pipeline for effective unlearning, along with a comprehensive evaluation framework. This framework introduces Knowledge Forgetting Rate (KFR) and Knowledge Retention Rate (KRR) to measure knowledge-level preservation, and Linguistic Score (LS) to evaluate generation quality. Our experiments show that ReLearn successfully achieves targeted forgetting while preserving high-quality output. Through mechanistic analysis, we further demonstrate how reverse optimization disrupts coherent text generation, while ReLearn preserves this essential capability. Code is available at this https URL.
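To illustrate the paradigm contrast the abstract draws, the following is a minimal sketch, not the paper's actual implementation: it contrasts a generic reverse-optimization (gradient ascent) objective on forget-set tokens with a ReLearn-style objective that fine-tunes on augmented, non-sensitive replacement responses using ordinary cross-entropy. Function and variable names (`reverse_optimization_loss`, `relearn_style_loss`, the toy tensors) are hypothetical placeholders.

```python
# Sketch only: contrasts reverse optimization with unlearning-via-learning.
# Not the authors' code; losses and names are illustrative assumptions.
import torch
import torch.nn.functional as F

def reverse_optimization_loss(logits, forget_ids):
    # Gradient ascent on the tokens to be forgotten: negate the usual
    # cross-entropy so optimization pushes target-token probabilities down.
    # This is the paradigm the abstract argues disrupts subsequent token
    # prediction and linguistic coherence.
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)), forget_ids.view(-1))
    return -ce

def relearn_style_loss(logits, replacement_ids):
    # Unlearning via learning: standard cross-entropy on augmented,
    # non-sensitive replacement responses, so the model learns new safe
    # outputs instead of being pushed away from old ones.
    return F.cross_entropy(logits.view(-1, logits.size(-1)),
                           replacement_ids.view(-1))

if __name__ == "__main__":
    # Toy shapes: batch of 2 sequences, length 8, vocabulary of 100 tokens.
    vocab, seq = 100, 8
    logits = torch.randn(2, seq, vocab, requires_grad=True)
    token_ids = torch.randint(0, vocab, (2, seq))
    print("reverse optimization loss:", reverse_optimization_loss(logits, token_ids).item())
    print("ReLearn-style loss:", relearn_style_loss(logits, token_ids).item())
```

In practice both objectives would be applied to a language model's output logits during fine-tuning; the sketch only highlights that the first negates the likelihood of existing targets while the second maximizes the likelihood of curated replacements.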

@article{xu2025_2502.11190,
  title={ReLearn: Unlearning via Learning for Large Language Models},
  author={Haoming Xu and Ningyuan Zhao and Liming Yang and Sendong Zhao and Shumin Deng and Mengru Wang and Bryan Hooi and Nay Oo and Huajun Chen and Ningyu Zhang},
  journal={arXiv preprint arXiv:2502.11190},
  year={2025}
}