Easy Data Unlearning Bench

Roy Rinberg
Pol Puigdemont
Martin Pawelczyk
Volkan Cevher
Main: 4 pages · 2 figures · Bibliography: 1 page · Appendix: 2 pages
Abstract

Evaluating machine unlearning methods remains technically challenging, with recent benchmarks requiring complex setups and significant engineering overhead. We introduce a unified and extensible benchmarking suite that simplifies the evaluation of unlearning algorithms using the KLoM (KL divergence of Margins) metric. Our framework provides precomputed model ensembles, oracle outputs, and streamlined infrastructure for running evaluations out of the box. By standardizing setup and metrics, it enables reproducible, scalable, and fair comparison across unlearning methods. We aim for this benchmark to serve as a practical foundation for accelerating research and promoting best practices in machine unlearning. Our code and data are publicly available.
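The abstract describes KLoM only by name (KL divergence of Margins). As a rough illustration of what such a metric looks like, the sketch below compares the margin distribution of an ensemble of unlearned models against that of oracle (retrained-from-scratch) models for each evaluation example. The function names, the Gaussian approximation of the margin distributions, and the array shapes are assumptions for exposition, not the benchmark's actual implementation.

```python
import numpy as np
from scipy.special import logsumexp

def margins(logits, labels):
    """Margin = logit of the true class minus logsumexp of the remaining classes.

    logits: (n_examples, n_classes), labels: (n_examples,)
    """
    logits = np.asarray(logits, dtype=np.float64)
    idx = np.arange(len(labels))
    true_logit = logits[idx, labels]
    others = logits.copy()
    others[idx, labels] = -np.inf  # exclude the true class from the logsumexp
    return true_logit - logsumexp(others, axis=1)

def klom_per_example(unlearned_margins, oracle_margins, eps=1e-6):
    """Per-example KL(unlearned || oracle) under a Gaussian fit to each ensemble.

    unlearned_margins, oracle_margins: (n_models, n_examples) arrays of margins.
    Fitting a Gaussian per example is a simplifying assumption for this sketch.
    """
    mu_u, sd_u = unlearned_margins.mean(0), unlearned_margins.std(0) + eps
    mu_o, sd_o = oracle_margins.mean(0), oracle_margins.std(0) + eps
    # Closed-form KL divergence between two univariate Gaussians.
    return np.log(sd_o / sd_u) + (sd_u**2 + (mu_u - mu_o)**2) / (2 * sd_o**2) - 0.5
```

In this sketch, a value near zero for an example means the unlearned ensemble's margins are statistically indistinguishable from the oracle ensemble's, which is the behavior an effective unlearning method should exhibit on the forget set.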
