
Enhancing Parallelism in Decentralized Stochastic Convex Optimization

Main: 9 pages · 6 figures · 3 tables · Bibliography: 3 pages · Appendix: 13 pages
Abstract

Decentralized learning has emerged as a powerful approach for handling large datasets across multiple machines in a communication-efficient manner. However, such methods often face scalability limitations, as increasing the number of machines beyond a certain point negatively impacts convergence rates. In this work, we propose Decentralized Anytime SGD, a novel decentralized learning algorithm that significantly extends the critical parallelism threshold, enabling the effective use of more machines without compromising performance. Within the stochastic convex optimization (SCO) framework, we establish a theoretical upper bound on parallelism that surpasses the current state-of-the-art, allowing larger networks to achieve favorable statistical guarantees and closing the gap with centralized learning in highly connected topologies.
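The exact update rules of Decentralized Anytime SGD are given in the paper itself; the following is only a minimal Python sketch of the general pattern the abstract describes, in which each machine queries stochastic gradients at a running ("anytime") average of its iterates and mixes iterates with its neighbors via gossip averaging. The mixing matrix, step size, averaging weights, and toy quadratic objective are illustrative assumptions, not the authors' construction.

# Hedged sketch (not the authors' exact method): decentralized SGD with
# gossip averaging over a doubly stochastic mixing matrix W, where gradients
# are evaluated at running-averaged ("anytime") query points.
import numpy as np

def decentralized_anytime_sgd_sketch(W, grad_oracle, d, T=1000, eta=0.01):
    """W: (m, m) doubly stochastic mixing matrix encoding the network topology.
    grad_oracle(i, x): stochastic gradient of machine i's local loss at x."""
    m = W.shape[0]
    X = np.zeros((m, d))        # current iterates, one row per machine
    X_avg = np.zeros((m, d))    # running (anytime) averages used as query points
    for t in range(1, T + 1):
        alpha = 2.0 / (t + 1)   # assumed averaging weight, illustrative choice
        X_avg = (1 - alpha) * X_avg + alpha * X
        # each machine computes a stochastic gradient at its averaged point
        G = np.stack([grad_oracle(i, X_avg[i]) for i in range(m)])
        # gossip step: mix neighbors' iterates, then take a local gradient step
        X = W @ X - eta * G
    return X_avg.mean(axis=0)   # consensus estimate of the solution

# Toy usage: m machines on a ring, each with a noisy quadratic sharing optimum x*.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, d = 8, 5
    x_star = rng.normal(size=d)
    ring = np.eye(m) / 2 + (np.roll(np.eye(m), 1, 0) + np.roll(np.eye(m), -1, 0)) / 4
    oracle = lambda i, x: (x - x_star) + 0.1 * rng.normal(size=d)
    x_hat = decentralized_anytime_sgd_sketch(ring, oracle, d)
    print("error:", np.linalg.norm(x_hat - x_star))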

@article{eisen2025_2506.00961,
  title={Enhancing Parallelism in Decentralized Stochastic Convex Optimization},
  author={Ofri Eisen and Ron Dorfman and Kfir Y. Levy},
  journal={arXiv preprint arXiv:2506.00961},
  year={2025}
}