
SSFL: Discovering Sparse Unified Subnetworks at Initialization for Efficient Federated Learning

Main: 13 pages, Appendix: 10 pages, Bibliography: 8 pages, 8 figures, 7 tables
Abstract

In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach to sparse federated learning with efficient communication. SSFL identifies a sparse subnetwork prior to training: parameter saliency scores are computed separately on local client data in non-IID scenarios and then aggregated to determine a global mask. Only the sparse model weights are trained and communicated between the clients and the server each round. On standard benchmarks including CIFAR-10, CIFAR-100, and Tiny-ImageNet, SSFL consistently improves the accuracy-sparsity trade-off, achieving more than 20% relative error reduction on CIFAR-10 compared to the strongest sparse baseline, while reducing communication costs by 2× relative to dense FL. Finally, in a real-world federated learning deployment, SSFL delivers over 2.3× faster communication time, underscoring its practical efficiency.
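To make the mask-discovery step concrete, below is a minimal sketch of what saliency-at-initialization followed by score aggregation might look like. It assumes a SNIP-style saliency score |w · ∂L/∂w| and simple averaging of client scores into a global top-k mask; the function names (`client_saliency`, `global_mask`), the `density` parameter, and the averaging rule are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def client_saliency(model: nn.Module, batch, loss_fn) -> torch.Tensor:
    """Per-parameter saliency |w * dL/dw| on one local batch (SNIP-style assumption)."""
    model.zero_grad()
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    scores = [
        (p * p.grad).abs().flatten()
        for p in model.parameters()
        if p.grad is not None
    ]
    return torch.cat(scores)

def global_mask(client_scores: list, density: float) -> torch.Tensor:
    """Aggregate client saliency scores and keep the top `density` fraction.

    Averaging across clients is an assumption; the paper defines its own
    aggregation rule over the non-IID client scores.
    """
    agg = torch.stack(client_scores).mean(dim=0)
    k = max(1, int(density * agg.numel()))
    threshold = torch.topk(agg, k).values.min()
    # Binary mask over the flattened parameter vector; only the surviving
    # entries are trained and communicated each round.
    return (agg >= threshold).float()
```

After the mask is fixed, each round would only exchange the masked weight entries between clients and the server, which is where the reported communication savings come from.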
