
An All-Reduce Compatible Top-K Compressor for Communication-Efficient Distributed Learning

Main: 7 pages · Bibliography: 1 page · 2 figures · 2 tables
Abstract

Communication remains a central bottleneck in large-scale distributed machine learning, and gradient sparsification has emerged as a promising strategy to alleviate this challenge. However, existing gradient compressors face notable limitations: Rand-K discards structural information and performs poorly in practice, while Top-K preserves informative entries but loses the contraction property and requires costly All-Gather operations. In this paper, we propose ARC-Top-K, an All-Reduce-Compatible Top-K compressor that aligns sparsity patterns across nodes using a lightweight sketch of the gradient, enabling index-free All-Reduce while preserving globally significant information. ARC-Top-K is provably contractive and, when combined with momentum error feedback (EF21M), achieves linear speedup and sharper convergence rates than the original EF21M under standard assumptions. Empirically, ARC-Top-K matches the accuracy of Top-K while reducing wall-clock training time by up to 60.7%, offering an efficient and scalable solution that combines the robustness of Rand-K with the strong performance of Top-K.
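The core idea described above, agreeing on a common sparsity pattern via a shared lightweight sketch so that nodes exchange only values and never indices, can be illustrated with a minimal single-process simulation. The sketch below is a hypothetical reconstruction, not the paper's exact compressor: it assumes the "lightweight sketch" is a block-wise squared-norm summary that is averaged across nodes (standing in for an All-Reduce) before every node derives the same top blocks.

```python
import numpy as np

def block_sketch(grad, num_blocks):
    """Assumed lightweight sketch: squared L2 norm of each contiguous block."""
    return np.array([np.sum(b * b) for b in np.array_split(grad, num_blocks)])

def arc_topk_allreduce(grads, k_blocks, num_blocks):
    """Simulated aligned-Top-K aggregation over a list of per-node gradients."""
    d = grads[0].size
    # 1) Each node computes a small sketch; sketches are All-Reduced (here: averaged).
    shared_sketch = np.mean([block_sketch(g, num_blocks) for g in grads], axis=0)
    # 2) Every node derives the SAME sparsity pattern from the shared sketch,
    #    so no index exchange (All-Gather) is needed afterwards.
    top_blocks = np.argsort(shared_sketch)[-k_blocks:]
    block_index_sets = np.array_split(np.arange(d), num_blocks)
    mask = np.zeros(d, dtype=bool)
    for b in top_blocks:
        mask[block_index_sets[b]] = True
    # 3) Nodes send only the values at the agreed coordinates; a plain
    #    All-Reduce (sum/average) aggregates them without any index metadata.
    reduced = np.mean([g[mask] for g in grads], axis=0)
    out = np.zeros(d)
    out[mask] = reduced
    return out  # average gradient restricted to the shared top coordinates

# Toy usage: 4 simulated nodes, 1000-dimensional gradients.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(1000) for _ in range(4)]
avg_sparse = arc_topk_allreduce(grads, k_blocks=5, num_blocks=50)
print(np.count_nonzero(avg_sparse), "coordinates kept out of", avg_sparse.size)
```

In a real distributed setting, step 1 and step 3 would each be a single All-Reduce call (e.g., over a small sketch vector and over the k selected values), which is what makes the pattern index-free; the error-feedback and momentum components (EF21M) mentioned in the abstract are omitted from this sketch.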
