ResearchTrend.AI
CRFU: Compressive Representation Forgetting Against Privacy Leakage on Machine Unlearning

27 February 2025
Weiqi Wang
Chenhan Zhang
Zhiyi Tian
Shushu Liu
Shui Yu
Abstract

Machine unlearning allows data owners to erase the impact of their specified data from trained models. Unfortunately, recent studies have shown that adversaries can recover the erased data, posing serious threats to user privacy. An effective unlearning method removes the information of the specified data from the trained model, resulting in different outputs for the same input before and after unlearning. Adversaries can exploit these output differences to conduct privacy leakage attacks, such as reconstruction and membership inference attacks. However, directly applying traditional defenses to unlearning leads to significant model utility degradation. In this paper, we introduce a Compressive Representation Forgetting Unlearning scheme (CRFU), designed to safeguard against privacy leakage on unlearning. CRFU achieves data erasure by minimizing the mutual information between the trained compressive representation (learned through information bottleneck theory) and the erased data, thereby maximizing the distortion of data. This ensures that the model's output contains less information that adversaries can exploit. Furthermore, we introduce a remembering constraint and an unlearning rate to balance the forgetting of erased data with the preservation of previously learned knowledge, thereby reducing accuracy degradation. Theoretical analysis demonstrates that CRFU can effectively defend against privacy leakage attacks. Our experimental results show that CRFU significantly increases the reconstruction mean square error (MSE), achieving a defense effect improvement of approximately 200% against privacy reconstruction attacks with only 1.5% accuracy degradation on MNIST.
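The abstract's core idea — shrinking the mutual information between the learned representation and the erased data while a remembering constraint preserves retained knowledge — can be illustrated with a toy sketch. This is not the authors' implementation: the Gaussian mutual-information estimator, the `crfu_objective` helper, and the unlearning-rate weighting below are simplified assumptions for illustration only.

```python
import math
import random

def gaussian_mi(xs, zs):
    """Estimate I(X; Z) in nats under a joint-Gaussian assumption:
    I = -0.5 * ln(1 - rho^2), where rho is the Pearson correlation."""
    n = len(xs)
    mx, mz = sum(xs) / n, sum(zs) / n
    cov = sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vz = sum((z - mz) ** 2 for z in zs) / n
    rho = cov / math.sqrt(vx * vz)
    return -0.5 * math.log(max(1.0 - rho ** 2, 1e-12))

def crfu_objective(mi_erased, remember_loss, unlearning_rate):
    """Hypothetical CRFU-style objective: minimize I(Z; D_e) for the
    erased data, weighted by an unlearning rate, while a remembering
    term (e.g. a task loss on retained data) limits accuracy loss."""
    return unlearning_rate * mi_erased + remember_loss

# Toy illustration: a representation still correlated with the erased
# data leaks more information than one that has "forgotten" it.
random.seed(0)
x = [random.gauss(0, 1) for _ in range(2000)]          # erased data
z_leaky = [xi + random.gauss(0, 0.3) for xi in x]      # high I(X; Z)
z_forgot = [0.1 * xi + random.gauss(0, 1.0) for xi in x]  # low I(X; Z)
print(gaussian_mi(x, z_leaky) > gaussian_mi(x, z_forgot))  # True
```

A real instantiation would compute the mutual-information term on the encoder's compressive representation (per the information bottleneck framing in the abstract) and backpropagate the combined objective, rather than estimating correlations post hoc.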

@article{wang2025_2503.00062,
  title={CRFU: Compressive Representation Forgetting Against Privacy Leakage on Machine Unlearning},
  author={Weiqi Wang and Chenhan Zhang and Zhiyi Tian and Shushu Liu and Shui Yu},
  journal={arXiv preprint arXiv:2503.00062},
  year={2025}
}