Decoding FL Defenses: Systemization, Pitfalls, and Remedies

While the community has designed various defenses to counter the threat of poisoning attacks in Federated Learning (FL), there are no guidelines for evaluating these defenses. These defenses are prone to subtle pitfalls in their experimental setups that lead to a false sense of security, rendering them unsuitable for practical deployment. In this paper, we systematically identify, analyze, and address these challenges. First, we design a comprehensive systemization of FL defenses along three dimensions: i) how client updates are processed, ii) what the server knows, and iii) at what stage the defense is applied. Next, we thoroughly survey 50 top-tier defense papers and identify the commonly used components in their evaluation setups. Based on this survey, we uncover six distinct pitfalls and study their prevalence. For example, we discover that around 30% of these works rely solely on the intrinsically robust MNIST dataset, and 40% employ simplistic attacks, which may inadvertently portray their defenses as robust. Using three representative defenses as case studies, we critically reevaluate them to study the impact of the identified pitfalls and show how these pitfalls lead to incorrect conclusions about robustness. Finally, we provide actionable recommendations to help researchers overcome each pitfall.
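To make the first dimension of the systemization, how client updates are processed, concrete, below is a minimal Python sketch of a generic server-side robust aggregation rule (coordinate-wise median). It is a hypothetical illustration of update processing at the server, assuming flattened NumPy update vectors; it is not any specific defense evaluated in the paper.

import numpy as np

def coordinate_wise_median(client_updates):
    """Aggregate client updates with the coordinate-wise median,
    a classic robust aggregation rule. `client_updates` is a list
    of 1-D NumPy arrays, one flattened model update per client.
    (Illustrative sketch, not the paper's method.)"""
    stacked = np.stack(client_updates)   # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)    # robust to a minority of outliers

# Toy example: nine benign clients plus one crudely poisoned update.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=5) for _ in range(9)]
poisoned = [np.full(5, 100.0)]           # hypothetical model-poisoning update
aggregate = coordinate_wise_median(benign + poisoned)
print(aggregate)                         # stays close to the benign updates

The median tolerates a minority of arbitrarily corrupted updates; strengthening or refining this kind of robustness property is the goal of many of the surveyed defenses.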
@article{khan2025_2502.05211,
  title   = {Decoding FL Defenses: Systemization, Pitfalls, and Remedies},
  author  = {Momin Ahmad Khan and Virat Shejwalkar and Yasra Chandio and Amir Houmansadr and Fatima Muhammad Anwar},
  journal = {arXiv preprint arXiv:2502.05211},
  year    = {2025}
}