Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications

18 July 2022
Ali Raza
Shujun Li
Kim-Phuc Tran
Ludovic Koehl
Kim Duc Tran
    AAML
Abstract

Adversarial attacks such as poisoning attacks have attracted the attention of many machine learning researchers. Traditionally, poisoning attacks attempt to inject adversarial training data in order to manipulate the trained model. In federated learning (FL), data poisoning attacks can be generalized to model poisoning attacks, which cannot be detected by simpler methods because the detector has no access to the local training data. State-of-the-art poisoning attack detection methods for FL have various weaknesses, e.g., requiring the number of attackers to be known or to remain low, working with i.i.d. data only, or incurring high computational complexity. To overcome the above weaknesses, we propose a novel framework for detecting poisoning attacks in FL, which employs a reference model based on a public dataset and an auditor model to detect malicious updates. We implemented a detector based on the proposed framework using a one-class support vector machine (OC-SVM), which reaches the lowest possible computational complexity of O(K), where K is the number of clients. We evaluated our detector's performance against state-of-the-art (SOTA) poisoning attacks for two typical applications of FL: electrocardiogram (ECG) classification and human activity recognition (HAR). Our experimental results show that our detector outperforms other SOTA detection methods.
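The abstract describes pairing a reference model trained on a public dataset with an auditor that scores each client's update, flagging updates that look unlike reference behaviour as poisoned. The sketch below is only an illustration of that idea using scikit-learn, not the authors' implementation; the feature construction (flattened weight deltas), the OneClassSVM hyperparameters, and the helper names fit_auditor and detect_malicious are assumptions made for this example.

# Minimal sketch (not the paper's code): flag anomalous client updates with a one-class SVM.
# Assumption: each update is flattened into a feature vector, and "reference" vectors come
# from a reference model trained on a public dataset.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_auditor(reference_updates: np.ndarray) -> OneClassSVM:
    # Fit the auditor on update vectors derived from the reference model.
    auditor = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
    auditor.fit(reference_updates)
    return auditor

def detect_malicious(auditor: OneClassSVM, client_updates: np.ndarray) -> np.ndarray:
    # OneClassSVM.predict returns +1 for inliers and -1 for outliers; scoring each of the
    # K client updates once is what gives the O(K) cost mentioned in the abstract.
    return auditor.predict(client_updates) == -1

# Toy usage with random vectors standing in for flattened weight deltas.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(200, 64))   # benign-looking reference updates
clients = rng.normal(0.0, 1.0, size=(10, 64))      # K = 10 client updates
clients[3] += 8.0                                   # one artificially shifted (poisoned) update
print(detect_malicious(fit_auditor(reference), clients))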

@article{raza2022_2207.08486,
  title={Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications},
  author={Ali Raza and Shujun Li and Kim-Phuc Tran and Ludovic Koehl and Kim Duc Tran},
  journal={arXiv preprint arXiv:2207.08486},
  year={2022}
}