Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning

13 July 2024
Shihua Sun
Shridatt Sugrim
Angelos Stavrou
Haining Wang
    AAML
Abstract

Federated Learning (FL) is vulnerable to targeted poisoning attacks that aim to cause misclassification specifically from a source class to a target class. However, well-established defense frameworks can greatly mitigate the poisoning impact of these attacks. We introduce BoTPA, a generalized pre-training-stage approach to Boost Targeted Poisoning Attacks against FL. Its design rationale is to leverage the model update contributions of all data points, including ones outside the source and target classes, to construct an Amplifier set whose data labels are falsified before the FL training process, thereby boosting the attack. We comprehensively evaluate the effectiveness and compatibility of BoTPA with various targeted poisoning attacks. Under data poisoning attacks, our evaluations reveal that BoTPA achieves a median Relative Increase in Attack Success Rate (RI-ASR) between 15.3% and 36.9% across all possible source-target class combinations and varying percentages of malicious clients, compared to its baseline. In the context of model poisoning, BoTPA attains RI-ASRs ranging from 13.3% to 94.7% in the presence of the Krum and Multi-Krum defenses, from 2.6% to 49.2% under the Median defense, and from 2.9% to 63.5% under the Flame defense.
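
The abstract only outlines the Amplifier-set idea at a high level. As a rough illustration of one plausible reading, the Python/PyTorch sketch below scores candidate points outside the source and target classes by how well their per-sample gradients align with a source-to-target misclassification direction, keeps the top-k as the Amplifier set, and falsifies their labels before FL training starts. The gradient-alignment heuristic, the relabel-to-target choice, and all function names are assumptions for illustration, not the paper's actual procedure.

# Illustrative sketch only: the selection heuristic and relabeling rule below
# are guesses at the abstract's high-level description, not the authors' method.
import torch
import torch.nn.functional as F

def per_sample_gradient(model, x, y):
    # Flattened loss gradient for a single (input, label) pair.
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.reshape(-1) for g in grads])

def build_amplifier_set(model, dataset, source_cls, target_cls, k):
    # Assumes dataset yields (input tensor, scalar label tensor) pairs.
    # Reference direction: gradient that pushes a source-class sample toward target_cls.
    x_src, _ = next((x, y) for x, y in dataset if y.item() == source_cls)
    ref = per_sample_gradient(model, x_src, torch.tensor(target_cls))

    scored = []
    for idx, (x, y) in enumerate(dataset):
        if y.item() in (source_cls, target_cls):
            continue  # per the abstract, Amplifier points come from the other classes
        g = per_sample_gradient(model, x, y)
        scored.append((F.cosine_similarity(g, ref, dim=0).item(), idx))
    scored.sort(reverse=True)
    return [idx for _, idx in scored[:k]]  # indices of the Amplifier set

def falsify_labels(labels, amplifier_idx, target_cls):
    # Relabeling to target_cls is an assumption; the paper only states labels are falsified.
    for idx in amplifier_idx:
        labels[idx] = target_cls
    return labels

In the pipeline the abstract describes, this label falsification happens before FL training begins, after which the attacker's chosen data- or model-poisoning attack proceeds as usual; the Amplifier set is meant to boost that existing attack rather than replace it.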

@article{sun2025_2407.09958,
  title={Partner in Crime: Boosting Targeted Poisoning Attacks against Federated Learning},
  author={Shihua Sun and Shridatt Sugrim and Angelos Stavrou and Haining Wang},
  journal={arXiv preprint arXiv:2407.09958},
  year={2025}
}