OmniLearn: A Framework for Distributed Deep Learning over Heterogeneous Clusters

Deep learning systems are typically optimized for clusters with homogeneous resources. However, heterogeneity is prevalent in computing infrastructure across edge, cloud, and HPC environments. When neural networks are trained with stochastic gradient descent (SGD) on heterogeneous resources, performance degrades due to stragglers and stale updates. In this work, we develop an adaptive batch-scaling framework called OmniLearn to mitigate the effects of heterogeneity in distributed training. Our approach, inspired by proportional controllers, balances computation across heterogeneous servers and adapts to varying resource availability. By dynamically adjusting worker mini-batches at runtime, OmniLearn reduces training time by 14-85%. We also investigate asynchronous training, where our techniques improve accuracy by up to 6.9%.
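To illustrate the idea of proportional-controller-based batch scaling described above, the following is a minimal sketch, not OmniLearn's actual implementation: per-worker mini-batch sizes are periodically shifted toward faster workers based on measured step times, while the global batch size is held fixed. The function name `rebalance_batches`, the `gain` parameter, and the example numbers are assumptions made for illustration.

```python
import numpy as np

def rebalance_batches(batch_sizes, step_times, global_batch, gain=0.5, min_batch=1):
    """Proportionally shift mini-batch sizes toward faster workers.

    batch_sizes:  current per-worker mini-batch sizes
    step_times:   measured per-iteration compute time of each worker (seconds)
    global_batch: fixed total batch size to preserve
    gain:         proportional-controller gain in (0, 1]
    """
    batch_sizes = np.asarray(batch_sizes, dtype=float)
    step_times = np.asarray(step_times, dtype=float)

    # Per-worker throughput (samples / second) from the last measurement.
    throughput = batch_sizes / step_times

    # Target allocation: split the global batch in proportion to throughput.
    target = global_batch * throughput / throughput.sum()

    # Proportional control: move only a fraction `gain` of the way toward the
    # target each round, damping oscillations from noisy timing samples.
    new_sizes = batch_sizes + gain * (target - batch_sizes)

    # Round and clamp, then repair rounding drift so the total stays fixed.
    new_sizes = np.maximum(np.rint(new_sizes).astype(int), min_batch)
    new_sizes[np.argmax(new_sizes)] += global_batch - new_sizes.sum()
    return new_sizes.tolist()

# Example: one fast worker and two slower ones sharing a global batch of 256.
print(rebalance_batches([86, 85, 85], [0.12, 0.30, 0.28], global_batch=256))
```

Keeping the global batch size constant means hyperparameters such as the learning rate need not change as the per-worker split is rebalanced; only the division of work across servers does.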
@article{tyagi2025_2503.17469,
  title   = {OmniLearn: A Framework for Distributed Deep Learning over Heterogeneous Clusters},
  author  = {Sahil Tyagi and Prateek Sharma},
  journal = {arXiv preprint arXiv:2503.17469},
  year    = {2025}
}