How Secure is Forgetting? Linking Machine Unlearning to Machine Learning Attacks

Abstract

As Machine Learning (ML) evolves, the security threats against this paradigm grow in complexity and sophistication, endangering data privacy and model integrity. In response, Machine Unlearning (MU) has emerged as a technique for removing the influence of specific data from a trained model, whether for privacy compliance (e.g., the GDPR's right to be forgotten), to honor user requests, or for model refinement. However, the intersection between classical ML threats and MU remains largely unexplored. In this Systematization of Knowledge (SoK), we provide a structured analysis of security threats in ML and their implications for MU. We examine four major attack classes, namely Backdoor Attacks, Membership Inference Attacks (MIA), Adversarial Attacks, and Inversion Attacks; we investigate their impact on MU and propose a novel classification based on how they are typically used in this context. Finally, we identify open challenges, including ethical considerations, and outline promising directions for future research in secure and privacy-preserving Machine Unlearning.
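To make the MU/MIA connection concrete, the following is a minimal sketch (not the paper's method) of how a membership inference attack is commonly repurposed as an unlearning audit: if forgetting succeeded, the model's per-sample losses on the forget set should be statistically indistinguishable from its losses on held-out data it never saw. The synthetic data, the use of exact retraining as a stand-in for an unlearning algorithm, and the loss-threshold attack with a KS test are all illustrative assumptions.

```python
# Hedged sketch: loss-based MIA as an unlearning audit.
# Assumptions: synthetic data; "unlearning" approximated by retraining
# on the retain set only; attacker compares loss distributions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic binary classification data (stand-in for a real dataset).
X = rng.normal(size=(3000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

retain, forget, heldout = X[:2000], X[2000:2500], X[2500:]
y_retain, y_forget, y_heldout = y[:2000], y[2000:2500], y[2500:]

# "Unlearned" model: here, exact retraining without the forget set.
model = LogisticRegression(max_iter=1000).fit(retain, y_retain)

def per_sample_loss(m, X_, y_):
    """Cross-entropy loss of each sample under model m."""
    p = m.predict_proba(X_)[np.arange(len(y_)), y_]
    return -np.log(np.clip(p, 1e-12, None))

loss_forget = per_sample_loss(model, forget, y_forget)
loss_heldout = per_sample_loss(model, heldout, y_heldout)

# Two-sample KS test: a large p-value is consistent with successful
# forgetting (the attacker cannot separate forget-set members
# from non-members by loss alone).
stat, p = ks_2samp(loss_forget, loss_heldout)
print(f"KS statistic={stat:.3f}, p-value={p:.3f}")
```

With exact retraining the two loss distributions coincide by construction, so the test should not reject; an approximate unlearning method that leaves residual influence would instead shift the forget-set losses downward and be flagged.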

@article{p.2025_2503.20257,
  title={How Secure is Forgetting? Linking Machine Unlearning to Machine Learning Attacks},
  author={Muhammed Shafi K. P. and Serena Nicolazzo and Antonino Nocera and Vinod P},
  journal={arXiv preprint arXiv:2503.20257},
  year={2025}
}