AMUN: Adversarial Machine UNlearning

2 March 2025
Ali Ebrahimpour-Boroojeny
Hari Sundaram
Varun Chandrasekaran
    MU
    AAML
Abstract

Machine unlearning, where users can request the deletion of a forget dataset, is becoming increasingly important because of numerous privacy regulations. Initial works on "exact" unlearning (e.g., retraining) incur large computational overheads. "Approximate" methods, while computationally inexpensive, have fallen short of matching the effectiveness of exact unlearning: the resulting models fail to obtain comparable accuracy and prediction confidence on both the forget and test (i.e., unseen) datasets. Exploiting this observation, we propose a new unlearning method, Adversarial Machine UNlearning (AMUN), that outperforms prior state-of-the-art (SOTA) methods for image classification. AMUN lowers the model's confidence on the forget samples by fine-tuning it on their corresponding adversarial examples. Adversarial examples naturally belong to the distribution imposed by the model on the input space; fine-tuning on the adversarial examples closest to the corresponding forget samples (a) localizes the changes to the model's decision boundary around each forget sample and (b) avoids drastic changes to the global behavior of the model, thereby preserving its accuracy on test samples. Using AMUN to unlearn a random 10% of CIFAR-10 samples, we observe that even SOTA membership inference attacks cannot do better than random guessing.
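
To make the mechanism described above concrete, here is a minimal PyTorch sketch of the idea: for each forget sample, find a nearby adversarial example and fine-tune the model on it. Everything in the sketch is an illustrative assumption rather than the paper's exact recipe: the attack (a PGD variant with early stopping), the helper names pgd_closest_adversarial and amun_like_unlearn, the choice of labeling each adversarial example with the model's own prediction, and all hyperparameters.

# Hedged sketch of the AMUN idea from the abstract: for each forget sample,
# find a nearby adversarial example and fine-tune the model on it.
# Attack, labels, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_closest_adversarial(model, x, y, eps=8/255, alpha=1/255, steps=50):
    """Run PGD within an eps-ball around x and stop as soon as every sample
    crosses the decision boundary, keeping the perturbation small."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # stay close to x
            x_adv = x_adv.clamp(0.0, 1.0)
            if (model(x_adv).argmax(dim=1) != y).all():  # all flipped: stop
                break
    return x_adv.detach()

def amun_like_unlearn(model, forget_loader, lr=1e-4, epochs=1):
    """Fine-tune on adversarial neighbours of the forget samples so that
    confidence on the forget samples drops while boundary changes stay local."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in forget_loader:
            model.eval()
            x_adv = pgd_closest_adversarial(model, x, y)
            with torch.no_grad():
                # label each adversarial example with the model's own
                # prediction on it (an assumption made for this sketch)
                y_adv = model(x_adv).argmax(dim=1)
            model.train()
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y_adv).backward()
            opt.step()
    return model

In practice one would also check accuracy on the retain and test sets after fine-tuning to confirm the abstract's claim that performance on unseen data is preserved while membership inference on the forget set drops to chance.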

@article{ebrahimpour-boroojeny2025_2503.00917,
  title={AMUN: Adversarial Machine UNlearning},
  author={Ali Ebrahimpour-Boroojeny and Hari Sundaram and Varun Chandrasekaran},
  journal={arXiv preprint arXiv:2503.00917},
  year={2025}
}