When unlearning is free: leveraging low influence points to reduce computational costs

Anat Kleiman
Robert Fisher
Ben Deaner
Udi Wieder
Main: 12 pages
Appendix: 12 pages
Bibliography: 2 pages
14 figures
2 tables
Abstract

As concerns around data privacy in machine learning grow, the ability to unlearn, or remove, specific data points from trained models becomes increasingly important. While state-of-the-art unlearning methods have emerged in response, they typically treat all points in the forget set equally. In this work, we challenge this approach by asking whether points that have a negligible impact on the model's learning need to be removed at all. Through a comparative analysis of influence functions across language and vision tasks, we identify subsets of training data with negligible impact on model outputs. Leveraging this insight, we propose an efficient unlearning framework that reduces the size of datasets before unlearning, leading to significant computational savings (up to approximately 50 percent) on real-world empirical examples.
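To illustrate the core idea, here is a minimal sketch (not the authors' code) of filtering a forget set by influence before unlearning. It assumes per-example influence scores have already been computed; the names (filter_forget_set, threshold, the synthetic scores) are illustrative assumptions, not the paper's API.

import numpy as np

def filter_forget_set(forget_indices, influence_scores, threshold=1e-4):
    """Keep only forget-set points whose absolute influence exceeds
    `threshold`; low-influence points are skipped ("free" to unlearn)."""
    forget_indices = np.asarray(forget_indices)
    scores = np.abs(influence_scores[forget_indices])
    return forget_indices[scores > threshold]

# Usage: synthetic influence scores for 10,000 training points and a
# forget set of 1,000 indices drawn from them.
rng = np.random.default_rng(0)
influence_scores = rng.exponential(scale=1e-4, size=10_000)
forget_set = rng.choice(10_000, size=1_000, replace=False)

reduced = filter_forget_set(forget_set, influence_scores)
print(f"Unlearning {len(reduced)} of {len(forget_set)} points "
      f"({1 - len(reduced) / len(forget_set):.0%} of the work saved)")

Any existing unlearning method would then run only on the reduced set; the reported savings (up to roughly 50 percent) come from how many forget-set points fall below the influence threshold.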
