ResearchTrend.AI
Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning

27 April 2020
Xinjian Luo
Xiangqi Zhu
    FedML
Abstract

Federated learning (FL) is a decentralized model training framework that aims to merge isolated data islands while maintaining data privacy. However, recent studies have revealed that Generative Adversarial Network (GAN) based attacks can be employed in FL to learn the distribution of private datasets and reconstruct recognizable images. In this paper, we exploit defenses against GAN-based attacks in FL and propose a framework, Anti-GAN, to prevent attackers from learning the real distribution of the victim's data. The core idea of Anti-GAN is to manipulate the visual features of private training images so that they remain unrecognizable to human eyes even when restored by attackers. Specifically, Anti-GAN projects the private dataset onto a GAN's generator and combines the generated fake images with the actual images to create the training dataset, which is then used for federated model training. The experimental results demonstrate that Anti-GAN is effective in preventing attackers from learning the distribution of private images while causing minimal harm to the accuracy of the federated model.
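The abstract's core step, combining GAN-generated fake images with the actual private images before federated training, can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation: the function name `build_antigan_dataset`, the pixel-wise blending interpretation of "combines", and the `mix_ratio` parameter are all assumptions introduced here for clarity.

```python
import numpy as np

def build_antigan_dataset(real_images, fake_images, mix_ratio=0.5, seed=0):
    """Hypothetical sketch of the Anti-GAN idea described in the abstract:
    blend GAN-generated fake images into the real private images to form
    the training set, so that features leaked during federated training
    no longer reveal the true data distribution.

    real_images, fake_images: arrays of identical shape (N, H, W[, C]),
        pixel values in [0, 1].
    mix_ratio: assumed blending weight for the fake images (not a
        parameter named in the paper).
    """
    assert real_images.shape == fake_images.shape
    # Pixel-wise blend: one plausible reading of "combines the generated
    # fake images with the actual images".
    mixed = (1.0 - mix_ratio) * real_images + mix_ratio * fake_images
    # Shuffle so sample order carries no information about the originals.
    rng = np.random.default_rng(seed)
    return mixed[rng.permutation(len(mixed))]
```

Each client would then train its local federated model on the returned mixed dataset instead of the raw private images; the fake images here stand in for a generator's output, which the paper obtains by projecting the private dataset onto a GAN.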

@article{luo2025_2004.12571,
  title={Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning},
  author={Xinjian Luo and Xianglong Zhang},
  journal={arXiv preprint arXiv:2004.12571},
  year={2025}
}