REVS: Unlearning Sensitive Information in Language Models via Rank Editing in the Vocabulary Space

Language models (LMs) risk inadvertently memorizing and divulging sensitive or personally identifiable information (PII) seen in training data, raising privacy concerns. Current approaches to address this issue involve costly dataset scrubbing or model filtering through unlearning and model editing, which can be bypassed by extraction attacks. We propose REVS, a novel non-gradient-based method for unlearning sensitive information from LMs. REVS identifies and modifies a small subset of neurons relevant to the tokens that constitute the sensitive information. To adequately evaluate our method on truly sensitive information, we curate three datasets: email and URL datasets naturally memorized by the models, and a synthetic social security number dataset that we tune the models to memorize. Compared to other methods, REVS demonstrates superior performance in unlearning sensitive information and robustness to extraction attacks, while retaining the integrity of the underlying model.
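To make the idea behind "rank editing in the vocabulary space" concrete, the following is a minimal sketch, not the authors' implementation: a neuron's value vector is projected to the vocabulary via the model's unembedding matrix, and the vector is nudged until the sensitive target token's rank in that projection falls below a chosen threshold. The function names, the demotion loop, and all hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch of rank demotion in the vocabulary space (not the paper's code).
import torch

def token_rank(logits: torch.Tensor, token_id: int) -> int:
    """Rank of token_id under the vocabulary logits (0 = highest)."""
    return int((logits > logits[token_id]).sum().item())

def demote_token(v: torch.Tensor, U: torch.Tensor, token_id: int,
                 target_rank: int = 1000, step: float = 0.1,
                 max_iters: int = 100) -> torch.Tensor:
    """Nudge neuron value vector v until token_id ranks below target_rank
    when v is projected to the vocabulary space via unembedding matrix U."""
    v = v.clone()
    for _ in range(max_iters):
        logits = U @ v                 # (vocab,) projection of the neuron
        if token_rank(logits, token_id) >= target_rank:
            break
        # Move v away from the target token's unembedding direction.
        v -= step * U[token_id]
    return v

# Toy usage with random tensors standing in for a real model.
vocab, hidden = 50_000, 768
U = torch.randn(vocab, hidden)
v = U[123] * 2.0 + 0.1 * torch.randn(hidden)   # neuron strongly aligned with token 123
v_edited = demote_token(v, U, token_id=123)
print(token_rank(U @ v, 123), "->", token_rank(U @ v_edited, 123))
```

Because the edit operates directly on selected neuron weights rather than on gradients, it illustrates how an approach of this kind can stay non-gradient-based and touch only a small subset of parameters.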
@article{ashuach2025_2406.09325,
  title={REVS: Unlearning Sensitive Information in Language Models via Rank Editing in the Vocabulary Space},
  author={Tomer Ashuach and Martin Tutek and Yonatan Belinkov},
  journal={arXiv preprint arXiv:2406.09325},
  year={2025}
}