When to Forget? Complexity Trade-offs in Machine Unlearning

24 February 2025
Martin Van Waerebeke
Marco Lorenzi
Giovanni Neglia
Kevin Scaman
Abstract

Machine Unlearning (MU) aims at removing the influence of specific data points from a trained model, striving to achieve this at a fraction of the cost of full model retraining. In this paper, we analyze the efficiency of unlearning methods and establish the first upper and lower bounds on minimax computation times for this problem, characterizing the performance of the most efficient algorithm against the most difficult objective function. Specifically, for strongly convex objective functions and under the assumption that the forget data is inaccessible to the unlearning method, we provide a phase diagram for the unlearning complexity ratio -- a novel metric that compares the computational cost of the best unlearning method to full model retraining. The phase diagram reveals three distinct regimes: one where unlearning at a reduced cost is infeasible, another where unlearning is trivial because adding noise suffices, and a third where unlearning achieves significant computational advantages over retraining. These findings highlight the critical role of factors such as data dimensionality, the number of samples to forget, and privacy constraints in determining the practical feasibility of unlearning.
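As an informal illustration only (not the paper's exact formalism), the unlearning complexity ratio described above can be thought of as comparing the worst-case computation time of the best certified unlearning algorithm against that of full retraining; a hedged sketch, with all symbols introduced here for illustration:

  \rho(\varepsilon,\delta) \;=\; \sup_{f \in \mathcal{F}} \; \frac{\inf_{\mathcal{A} \in \mathcal{U}_{\varepsilon,\delta}} T_{\mathcal{A}}(f)}{T_{\mathrm{retrain}}(f)}

where \mathcal{F} denotes a class of strongly convex objectives, \mathcal{U}_{\varepsilon,\delta} the set of unlearning methods satisfying the privacy constraint without access to the forget data, and T(\cdot) computation time. Under this reading, \rho \approx 1 corresponds to the regime where cheap unlearning is infeasible, \rho \approx 0 to the regime where adding noise suffices, and intermediate values to genuine but partial computational savings over retraining.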

@article{waerebeke2025_2502.17323,
  title={When to Forget? Complexity Trade-offs in Machine Unlearning},
  author={Martin Van Waerebeke and Marco Lorenzi and Giovanni Neglia and Kevin Scaman},
  journal={arXiv preprint arXiv:2502.17323},
  year={2025}
}