
Certified Unlearning for Neural Networks

8 June 2025
Anastasia Koloskova
Youssef Allouah
Animesh Jha
Rachid Guerraoui
Sanmi Koyejo
Main: 9 pages · 4 figures · 16 tables · Bibliography: 2 pages · Appendix: 13 pages
Abstract

We address the problem of machine unlearning, where the goal is to remove the influence of specific training data from a model upon request, motivated by privacy concerns and regulatory requirements such as the "right to be forgotten." Unfortunately, existing methods rely on restrictive assumptions or lack formal guarantees. To address this, we propose a novel method for certified machine unlearning, leveraging the connection between unlearning and privacy amplification by stochastic post-processing. Our method uses noisy fine-tuning on the retain data, i.e., data that does not need to be removed, to ensure provable unlearning guarantees. This approach requires no assumptions about the underlying loss function, making it broadly applicable across diverse settings. We analyze the theoretical trade-offs in efficiency and accuracy and demonstrate empirically that our method not only achieves formal unlearning guarantees but also performs effectively in practice, outperforming existing baselines. Our code is available at this https URL
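
The core recipe the abstract describes, noisy fine-tuning on the retain set, can be illustrated with a short sketch. The snippet below is a hypothetical PyTorch illustration, not the authors' released implementation: gradient steps on retain data are clipped and perturbed with Gaussian noise, the stochastic post-processing that underlies a formal guarantee. The function name noisy_finetune and all hyperparameter values are placeholders; the paper derives how the noise must actually be calibrated to certify unlearning.

import torch
from torch import nn
from torch.utils.data import DataLoader

def noisy_finetune(model: nn.Module,
                   retain_loader: DataLoader,
                   lr: float = 0.01,          # placeholder value
                   clip_norm: float = 1.0,    # placeholder value
                   noise_std: float = 0.05,   # placeholder value
                   epochs: int = 1) -> nn.Module:
    """Hypothetical sketch of noisy fine-tuning on retain data.

    Each step clips the gradient norm and adds Gaussian noise before
    updating, so the update is a stochastic post-processing of the
    original model. The certified noise scale is derived in the paper.
    """
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in retain_loader:  # only data that should be kept
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            # Bound each step's sensitivity, then randomize it.
            nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
            with torch.no_grad():
                for p in model.parameters():
                    if p.grad is not None:
                        p.grad.add_(noise_std * torch.randn_like(p.grad))
            opt.step()
    return model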

View on arXiv
@article{koloskova2025_2506.06985,
  title={Certified Unlearning for Neural Networks},
  author={Anastasia Koloskova and Youssef Allouah and Animesh Jha and Rachid Guerraoui and Sanmi Koyejo},
  journal={arXiv preprint arXiv:2506.06985},
  year={2025}
}