
Unmasking Efficiency: Learning Salient Sparse Models in Non-IID Federated Learning

Main: 13 pages · 8 figures · Bibliography: 8 pages · 7 tables · Appendix: 10 pages
Abstract

In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach to sparse federated learning with efficient communication. SSFL identifies a sparse subnetwork prior to training by computing parameter saliency scores separately on each client's local data in non-IID scenarios, then aggregating these scores to determine a global mask. Only the sparse model weights are communicated between the clients and the server each round. We validate SSFL's effectiveness on standard non-IID benchmarks, observing marked improvements in the sparsity--accuracy trade-off. Finally, we deploy our method in a real-world federated learning framework and report reduced communication time.
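The masking procedure described above can be sketched as follows. This is an illustrative assumption, not the paper's exact method: we use the common |weight × gradient| saliency criterion, aggregate per-client scores by a simple mean, and keep the top fraction of parameters as a shared binary mask.

```python
import numpy as np

def local_saliency(weights, grads):
    # Saliency proxy: |weight * gradient| (a common criterion; the paper's
    # exact score may differ -- this is an illustrative assumption).
    return np.abs(weights * grads)

def global_mask(client_saliencies, sparsity):
    # Aggregate per-client scores (simple mean here, as one plausible
    # aggregation) and keep the top (1 - sparsity) fraction of parameters.
    agg = np.mean(client_saliencies, axis=0)
    k = int(round((1.0 - sparsity) * agg.size))
    thresh = np.sort(agg.ravel())[::-1][k - 1]
    return (agg >= thresh).astype(np.float32)

# Toy example: 3 clients, 10 parameters, 70% sparsity -> 3 weights kept.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
sals = [local_saliency(w, rng.normal(size=10)) for _ in range(3)]
mask = global_mask(np.stack(sals), sparsity=0.7)
print(int(mask.sum()))  # number of retained parameters
```

Each round, only the entries of the weight vector where the mask is 1 would be exchanged, which is the source of the communication savings.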
