Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate

Machine unlearning has been used to remove unwanted knowledge acquired by large language models (LLMs). In this paper, we examine machine unlearning from an optimization perspective, framing it as a regularized multi-task optimization problem in which one task optimizes a forgetting objective and another preserves model performance. In particular, we introduce a normalized gradient difference (NGDiff) algorithm that gives finer control over the trade-off between the two objectives, and we integrate a new, automatic learning rate scheduler. We provide a theoretical analysis and empirically demonstrate that NGDiff outperforms state-of-the-art unlearning methods on the TOFU and MUSE datasets while exhibiting stable training.
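To make the idea concrete, below is a minimal PyTorch sketch of a normalized-gradient-difference update, reconstructed from the abstract alone: each task's gradient is unit-normalized before the retain direction is descended and the forget direction is ascended, so that neither objective dominates the step. The function name `ngdiff_step`, the `base_lr` parameter, and the exact normalization scheme are assumptions for illustration; the paper's actual update rule and its automatic learning rate scheduler may differ.

```python
import torch

def ngdiff_step(model, forget_loss, retain_loss, base_lr=1e-5):
    """One hypothetical NGDiff-style update (sketch, not the paper's
    exact algorithm): normalize each task's gradient globally, then
    step along their difference. `base_lr` stands in for the paper's
    automatic learning rate scheduler."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Per-task gradients: the retain task preserves model utility,
    # the forget task is ascended to remove unwanted knowledge.
    g_retain = torch.autograd.grad(retain_loss, params, retain_graph=True)
    g_forget = torch.autograd.grad(forget_loss, params)

    # Global L2 norm of each task's gradient across all parameters.
    retain_norm = torch.sqrt(sum(g.pow(2).sum() for g in g_retain))
    forget_norm = torch.sqrt(sum(g.pow(2).sum() for g in g_forget))
    eps = 1e-12  # guard against division by zero

    with torch.no_grad():
        for p, gr, gf in zip(params, g_retain, g_forget):
            # Normalized gradient difference: descend the retain loss,
            # ascend the forget loss, each direction unit-normalized.
            d = gr / (retain_norm + eps) - gf / (forget_norm + eps)
            p -= base_lr * d
```

Normalizing before differencing is what distinguishes this from plain gradient-difference unlearning, where a much larger forgetting gradient can swamp the retain signal and destabilize training.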
@article{bu2025_2410.22086,
  title={Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate},
  author={Zhiqi Bu and Xiaomeng Jin and Bhanukiran Vinzamuri and Anil Ramakrishna and Kai-Wei Chang and Volkan Cevher and Mingyi Hong},
  journal={arXiv preprint arXiv:2410.22086},
  year={2025}
}