PRUNE: A Patching Based Repair Framework for Certifiable Unlearning of Neural Networks

Abstract

It is often desirable to remove (a.k.a. unlearn) a specific part of the training data from a trained neural network model. A typical application scenario is to protect the data holder's right to be forgotten, which has been promoted by many recent regulations. Existing unlearning methods involve training alternative models with the remaining data, which may be costly and challenging to verify from the data holder's or a third-party auditor's perspective. In this work, we provide a new angle and propose a novel unlearning approach that imposes a carefully crafted "patch" on the original neural network to achieve targeted "forgetting" of the data requested for deletion. Specifically, inspired by the research line of neural network repair, we propose to strategically seek a lightweight minimum "patch" for unlearning a given data point with a certifiable guarantee. Furthermore, to unlearn a considerable number of data points (or an entire class), we propose to iteratively select a small subset of representative data points to unlearn, which achieves the effect of unlearning the whole set. Extensive experiments on multiple categorical datasets demonstrate our approach's effectiveness, achieving measurable unlearning while preserving the model's performance and remaining competitive in efficiency and memory consumption compared to various baseline methods.
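The abstract's high-level loop (pick a representative point from the forget set, apply a minimal weight "patch" until every requested point is forgotten) can be sketched as follows. This is a toy illustration on a linear classifier, not the paper's method: the `minimal_patch` rule, the confidence-based representative selection, and all function names here are assumptions for exposition; PRUNE's actual patch search comes with certifiable guarantees that this sketch does not provide.

```python
import numpy as np

def is_forgotten(w, x, y):
    # A point counts as "forgotten" once the model no longer predicts its label.
    return int(w @ x > 0) != y

def minimal_patch(w, x, y, step=0.1, max_iter=100):
    # Hypothetical stand-in for the paper's minimum-patch search: grow the
    # smallest multiple of a flipping direction until the prediction on
    # (x, y) changes.
    direction = -x if y == 1 else x
    patch = np.zeros_like(w)
    for _ in range(max_iter):
        if is_forgotten(w + patch, x, y):
            break
        patch += step * direction
    return patch

def unlearn(w, forget_X, forget_y):
    # Iteratively patch against a representative point (here: the most
    # confidently retained one) until the whole forget set is unlearned.
    w = w.copy()
    while True:
        remaining = [i for i in range(len(forget_y))
                     if not is_forgotten(w, forget_X[i], forget_y[i])]
        if not remaining:
            return w
        rep = max(remaining, key=lambda i: abs(w @ forget_X[i]))
        w += minimal_patch(w, forget_X[rep], forget_y[rep])
```

Note how patching one well-chosen representative can forget several nearby points at once, which is the intuition behind unlearning a large set via a small subset.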

@article{li2025_2505.06520,
  title={PRUNE: A Patching Based Repair Framework for Certifiable Unlearning of Neural Networks},
  author={Xuran Li and Jingyi Wang and Xiaohan Yuan and Peixin Zhang and Zhan Qin and Zhibo Wang and Kui Ren},
  journal={arXiv preprint arXiv:2505.06520},
  year={2025}
}