
Protocol Models: Scaling Decentralized Training with Communication-Efficient Model Parallelism

Main: 15 pages
Appendix: 20 pages
Bibliography: 5 pages
17 figures
5 tables
Abstract

Scaling models has led to significant advances in deep learning, but training these models in decentralized settings remains challenging due to communication bottlenecks. While existing compression techniques are effective in data-parallel settings, they do not extend to model parallelism. Unlike data-parallel training, where weight gradients are exchanged, model-parallel training requires compressing activations and activation gradients as they propagate through layers, which accumulates compression errors. We propose a novel compression algorithm that compresses both the forward and backward passes, enabling up to 99% compression with no convergence degradation and negligible memory/compute overhead. By leveraging a recursive structure in transformer networks, we predefine a low-dimensional subspace that confines the activations and gradients, allowing full reconstruction in subsequent layers. Our method achieves up to 100x improvement in communication efficiency and enables training billion-parameter-scale models over low-end GPUs connected via consumer-grade internet speeds as low as 80 Mbps, matching the convergence of centralized datacenter systems with 100 Gbps connections under model parallelism.
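To make the core idea concrete, below is a minimal sketch of subspace-based activation compression at a model-parallel boundary. All names (`basis`, `compress`, `reconstruct`) and dimensions are hypothetical illustrations, not the paper's implementation; in particular, the sketch does not capture how the recursive transformer structure is exploited to guarantee full reconstruction, it only shows the general pattern of projecting onto a predefined low-dimensional subspace before communication and lifting back afterwards.

```python
import torch

# Hypothetical sizes: only K coefficients per token cross the network
# instead of the full H-dimensional activation.
hidden_dim, subspace_dim = 4096, 64

# A fixed orthonormal basis agreed on by both pipeline stages ahead of time,
# so only the coefficients (not the basis) are communicated.
basis = torch.linalg.qr(torch.randn(hidden_dim, subspace_dim)).Q  # (H, K)

def compress(activations: torch.Tensor) -> torch.Tensor:
    """Project (batch, seq, H) activations onto the K-dim subspace."""
    return activations @ basis            # (batch, seq, K), sent over the wire

def reconstruct(coefficients: torch.Tensor) -> torch.Tensor:
    """Lift received coefficients back into the full hidden dimension."""
    return coefficients @ basis.T         # (batch, seq, H)

# Sender side (end of pipeline stage i)
x = torch.randn(2, 128, hidden_dim)
payload = compress(x)                     # ~98% fewer values than x

# Receiver side (start of pipeline stage i+1)
x_hat = reconstruct(payload)
```

The same projection/lifting pattern would apply to activation gradients on the backward pass; the paper's contribution is choosing the subspace so that this round trip loses no information needed by subsequent layers, which a random basis as above would not achieve.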

@article{ramasinghe2025_2506.01260,
  title={Protocol Models: Scaling Decentralized Training with Communication-Efficient Model Parallelism},
  author={Sameera Ramasinghe and Thalaiyasingam Ajanthan and Gil Avraham and Yan Zuo and Alexander Long},
  journal={arXiv preprint arXiv:2506.01260},
  year={2025}
}