Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate

North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Main: 14 pages · Appendix: 10 pages · Bibliography: 5 pages · 8 figures · 6 tables
Abstract

Machine unlearning has been used to remove unwanted knowledge acquired by large language models (LLMs). In this paper, we examine machine unlearning from an optimization perspective, framing it as a regularized multi-task optimization problem in which one task optimizes a forgetting objective and another optimizes model performance. In particular, we introduce a normalized gradient difference (NGDiff) algorithm, which gives finer control over the trade-off between the two objectives and integrates a new, automatic learning-rate scheduler. We provide a theoretical analysis and empirically demonstrate that NGDiff outperforms state-of-the-art unlearning methods on the TOFU and MUSE datasets while exhibiting stable training.
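The abstract describes combining a forgetting objective and a retain (model-performance) objective via a normalized gradient difference. A minimal NumPy sketch of one such update step is shown below; this is an illustrative reading of the idea, not the paper's exact algorithm, and the function name `ngdiff_step`, the fixed step size `lr`, and the stabilizer `eps` are assumptions (the paper's automatic learning-rate scheduler is not reproduced here).

```python
import numpy as np

def ngdiff_step(params, grad_forget, grad_retain, lr=0.1, eps=1e-8):
    """One hypothetical NGDiff-style update (illustrative sketch only).

    Each task gradient is normalized to unit length so that neither
    objective dominates purely by gradient magnitude; the update then
    descends the retain gradient while ascending the forget gradient.
    """
    # Normalize each task gradient; eps guards against division by zero.
    g_f = grad_forget / (np.linalg.norm(grad_forget) + eps)
    g_r = grad_retain / (np.linalg.norm(grad_retain) + eps)
    # Normalized gradient difference: descend retain loss, ascend forget loss.
    direction = g_r - g_f
    return params - lr * direction
```

For example, with `grad_forget = [1, 0]` and `grad_retain = [0, 1]`, the update moves the parameters toward higher forgetting loss along the first coordinate and lower retain loss along the second, with equal weight on each because of the normalization.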
