Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach

Abstract

Manipulation of local training data and local updates, i.e., the poisoning attack, is the main threat arising from the collaborative nature of the federated learning (FL) paradigm. Most existing poisoning attacks aim to manipulate local data/models in a way that causes denial-of-service (DoS) issues. In this paper, we introduce a novel attack method, the Federated Learning Sliding Attack (FedSA) scheme, which aims to control the extent of poisoning precisely and in a subtle, controlled manner. It operates with a predefined objective, such as reducing the global model's prediction accuracy by 10%. FedSA integrates robust nonlinear control theory, specifically Sliding Mode Control (SMC), with model poisoning attacks. It manipulates the updates from malicious clients to drive the global model towards a compromised state at a controlled and inconspicuous rate. Moreover, the robustness properties of SMC give the attacker precise control over the convergence bound, allowing the global accuracy of the poisoned model to be set to any desired level. Experimental results demonstrate that FedSA accurately achieves a predefined global accuracy with fewer malicious clients while maintaining a high level of stealth and adjustable learning rates.
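
To make the idea concrete, the following is a minimal illustrative sketch (not the authors' FedSA algorithm) of how a sliding-mode-style switching law could scale a malicious perturbation so that the global accuracy is driven toward a preset target instead of being collapsed outright. All names and parameters here (sliding_mode_poison, attack_direction, gain, boundary) are assumptions introduced for illustration only.

import numpy as np

def sliding_mode_poison(honest_update, global_acc, target_acc,
                        attack_direction, gain=0.1, boundary=0.02):
    # Hypothetical sketch of an SMC-inspired poisoning rule.
    # Sliding surface: deviation of the current global accuracy from the
    # attacker's target accuracy.
    s = global_acc - target_acc
    # Saturated sign function (boundary layer) to reduce chattering around
    # the sliding surface.
    switch = np.clip(s / boundary, -1.0, 1.0)
    # Perturb the honest local update along a chosen attack direction, with
    # magnitude proportional to the switching term: poison harder while the
    # accuracy is still above target, back off once it undershoots.
    return honest_update + gain * switch * attack_direction

# Example usage with toy values: a malicious client nudges its update while
# the global accuracy (0.85) remains above the attacker's target (0.75).
update = np.zeros(10)
direction = np.ones(10)
poisoned = sliding_mode_poison(update, global_acc=0.85, target_acc=0.75,
                               attack_direction=direction)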

@article{pan2025_2505.16403,
  title={Performance Guaranteed Poisoning Attacks in Federated Learning: A Sliding Mode Approach},
  author={Huazi Pan and Yanjun Zhang and Leo Yu Zhang and Scott Adams and Abbas Kouzani and Suiyang Khoo},
  journal={arXiv preprint arXiv:2505.16403},
  year={2025}
}