ResearchTrend.AI

Find a Scapegoat: Poisoning Membership Inference Attack and Defense to Federated Learning

1 July 2025
Wenjin Mo
Zhiyuan Li
Minghong Fang
Mingwei Fang
Topics: AAML
Main: 8 pages · Bibliography: 2 pages · Appendix: 11 pages · 5 figures · 18 tables
Abstract

Federated learning (FL) allows multiple clients to collaboratively train a global machine learning model under the coordination of a central server, without sharing their raw data. This approach is particularly appealing in the era of privacy regulations such as the GDPR, and many prominent companies have adopted it. However, FL's distributed nature makes it susceptible to poisoning attacks, in which malicious clients controlled by an attacker submit harmful model updates to compromise the global model. Most existing poisoning attacks in FL aim to degrade the model's integrity, for example by reducing its accuracy, and pay limited attention to the privacy risks such attacks pose. In this study, we introduce FedPoisonMIA, a novel poisoning membership inference attack against FL, in which malicious clients craft their local model updates to infer the membership of target data points. We also propose a robust defense mechanism to mitigate the impact of FedPoisonMIA. Extensive experiments across various datasets demonstrate the attack's effectiveness, while our defense reduces, though does not eliminate, its impact.
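The abstract does not spell out the FedPoisonMIA algorithm itself, but the setting it describes can be illustrated with a generic sketch: clients train locally, a server averages their models (FedAvg), and an observer then runs a classic loss-threshold membership inference test, exploiting the fact that training points tend to incur lower loss under the final model than fresh points. Everything below (logistic-regression clients, dataset sizes, learning rates) is an illustrative assumption, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5                                # feature dimension (illustrative)
w_true = rng.normal(size=d)          # ground-truth separating direction

def make_data(n):
    """Sample linearly separable binary-classification data."""
    X = rng.normal(size=(n, d))
    y = (X @ w_true > 0).astype(float)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(w, X, y, lr=0.5, epochs=20):
    """One client's local logistic-regression training pass."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def log_loss(w, X, y):
    p = np.clip(sigmoid(X @ w), 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Four clients, each with a deliberately small local dataset so the
# global model overfits and the membership signal is visible.
clients = [make_data(15) for _ in range(4)]

# FedAvg: each round the server averages the clients' local models.
w_global = np.zeros(d)
for _ in range(50):
    w_global = np.mean(
        [local_update(w_global, X, y) for X, y in clients], axis=0)

# Loss-threshold membership inference: compare the model's loss on
# training points ("members") against fresh points ("non-members").
X_mem = np.vstack([X for X, _ in clients])
y_mem = np.concatenate([y for _, y in clients])
X_non, y_non = make_data(500)

member_loss = log_loss(w_global, X_mem, y_mem).mean()
nonmember_loss = log_loss(w_global, X_non, y_non).mean()
print(f"member loss {member_loss:.3f} vs non-member loss {nonmember_loss:.3f}")
```

In the paper's threat model the inference is stronger than this passive test: malicious clients actively craft their updates to amplify the membership signal in the aggregated model. The sketch only shows why the signal exists in the first place.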

@article{mo2025_2507.00423,
  title={Find a Scapegoat: Poisoning Membership Inference Attack and Defense to Federated Learning},
  author={Wenjin Mo and Zhiyuan Li and Minghong Fang and Mingwei Fang},
  journal={arXiv preprint arXiv:2507.00423},
  year={2025}
}