
Sparsification Under Siege: Dual-Level Defense Against Poisoning in Communication-Efficient Federated Learning

6 pages main text, 7 pages appendix, 2 pages bibliography; 8 figures, 4 tables
Abstract

Gradient sparsification, while mitigating communication bottlenecks in Federated Learning (FL), fundamentally alters the geometric landscape of model updates. We reveal that the resulting high-dimensional orthogonality renders traditional Euclidean-based robust aggregation metrics mathematically ambiguous, creating a 'sparsity-robustness trade-off' that adversaries exploit to bypass detection. To resolve this structural dissonance, we propose SafeSparse, a consensus restoration framework that decouples defense into topological and semantic dimensions. Unlike prior art that treats sparsification and security as independent concerns, SafeSparse introduces: (1) a Structure-Aware Calibration mechanism that uses Jaccard similarity to filter topological outliers induced by index poisoning; and (2) a Directional Semantic Alignment module that applies density-based clustering to update signs to neutralize magnitude-invariant attacks. Theoretically, we establish convergence guarantees for SafeSparse. Extensive experiments across multiple datasets and attack scenarios demonstrate that SafeSparse recovers up to 25.7% global accuracy under coordinated poisoning, effectively closing the vulnerability gap in communication-efficient FL.
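To make the first idea concrete: the abstract's point is that when each client transmits only a sparse set of gradient coordinates, Euclidean distance between updates becomes uninformative, but the *overlap of selected index sets* still carries a consensus signal. The sketch below is a minimal illustration of Jaccard-based topological filtering under that intuition; the function names, the mean-similarity scoring rule, and the threshold value are our own assumptions, not the paper's actual SafeSparse algorithm.

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| of two index sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def filter_topological_outliers(client_indices, threshold=0.3):
    """Keep clients whose sparse index set agrees with the cohort.

    client_indices: list of iterables, the top-k coordinate indices
    each client reported. A client is kept if its mean Jaccard
    similarity to all other clients meets the (assumed) threshold.
    Returns the positions of the retained clients.
    """
    n = len(client_indices)
    kept = []
    for i in range(n):
        sims = [jaccard(client_indices[i], client_indices[j])
                for j in range(n) if j != i]
        if sum(sims) / len(sims) >= threshold:
            kept.append(i)
    return kept

# Three benign clients share most indices; one index-poisoned
# client reports a disjoint set and is filtered out.
reports = [{0, 1, 2, 3}, {0, 1, 2, 4}, {0, 1, 3, 4}, {10, 11, 12, 13}]
print(filter_topological_outliers(reports))  # → [0, 1, 2]
```

The second module would operate analogously in the semantic dimension, clustering the sign patterns of the surviving updates so that magnitude-rescaled malicious updates cannot hide behind small norms.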
