Trading-off Accuracy and Communication Cost in Federated Learning

Leveraging the training-by-pruning paradigm introduced by Zhou et al., Isik et al. introduced a federated learning protocol that achieves a 34-fold reduction in communication cost. We achieve compression improvements of orders of magnitude over the state-of-the-art. The central idea of our framework is to encode the network weights $\mathbf{w}$ by a vector of trainable parameters $\mathbf{p}$, such that $\mathbf{w} = G\mathbf{p}$, where $G$ is a carefully-generated sparse random matrix (that remains fixed throughout training). In this framework, the previous work of Zhou et al. [NeurIPS'19] is recovered when $G$ is diagonal and $\mathbf{p}$ has the same dimension as $\mathbf{w}$. We instead show that $\mathbf{p}$ can effectively be chosen much smaller than $\mathbf{w}$, while retaining the same accuracy at the price of a decrease in the sparsity of $G$. Since server and clients only need to share $\mathbf{p}$, this trade-off leads to a substantial improvement in communication cost. Moreover, we provide theoretical insight into our framework and establish a novel link between training-by-sampling and random convex geometry.
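To make the encoding concrete, the following is a minimal NumPy sketch of the reparametrization $\mathbf{w} = G\mathbf{p}$. The dimensions, the density of $G$, and the use of a shared random seed are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000      # number of network weights (dimension of w) -- illustrative
m = 500         # number of trainable parameters (dimension of p), m << n
density = 0.01  # fraction of nonzero entries in G -- illustrative

# Fixed sparse random encoding matrix G (n x m). Because G is generated
# from a shared seed, server and clients can regenerate it locally and
# never need to transmit it.
mask = rng.random((n, m)) < density
G = np.where(mask, rng.standard_normal((n, m)), 0.0)

# Only p is trainable; the network weights are decoded as w = G @ p.
p = rng.standard_normal(m)
w = G @ p  # weights, never communicated directly

# In a federated round, a client would update p locally and send only
# the m-dimensional vector p (or its update) to the server.
print(f"weights: {n}, communicated parameters: {m} "
      f"({n / m:.0f}x fewer values per round)")
```

In this sketch, communication cost scales with $m$ rather than $n$, which is the trade-off the abstract describes: shrinking $\mathbf{p}$ reduces what must be shared, at the price of a denser $G$.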
@article{villani2025_2503.14246,
  title={Trading-off Accuracy and Communication Cost in Federated Learning},
  author={Mattia Jacopo Villani and Emanuele Natale and Frederik Mallmann-Trenn},
  journal={arXiv preprint arXiv:2503.14246},
  year={2025}
}