Anytime MiniBatch: Exploiting Stragglers in Online Distributed Optimization
Nuwan S. Ferdinand, H. Al-Lawati, S. Draper, M. Nokleby
arXiv:2006.05752, 10 June 2020
Papers citing "Anytime MiniBatch: Exploiting Stragglers in Online Distributed Optimization" (11 papers):
1. OmniLearn: A Framework for Distributed Deep Learning over Heterogeneous Clusters. S. Tyagi, Prateek Sharma. 21 Mar 2025.
2. SignSGD with Federated Voting. Chanho Park, H. Vincent Poor, Namyoon Lee. 25 Mar 2024. [FedML]
3. Taming Resource Heterogeneity In Distributed ML Training With Dynamic Batching. S. Tyagi, Prateek Sharma. 20 May 2023.
4. STSyn: Speeding Up Local SGD with Straggler-Tolerant Synchronization. Feng Zhu, Jingjing Zhang, Xin Eric Wang. 06 Oct 2022.
5. Lightweight Projective Derivative Codes for Compressed Asynchronous Gradient Descent. Pedro Soto, Ilia Ilmer, Haibin Guan, Jun Li. 31 Jan 2022.
6. Trade-offs of Local SGD at Scale: An Empirical Study. Jose Javier Gonzalez Ortiz, Jonathan Frankle, Michael G. Rabbat, Ari S. Morcos, Nicolas Ballas. 15 Oct 2021. [FedML]
7. Decentralized optimization with non-identical sampling in presence of stragglers. Tharindu B. Adikari, S. Draper. 25 Aug 2021.
8. Anytime Minibatch with Delayed Gradients. H. Al-Lawati, S. Draper. 15 Dec 2020.
9. SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum. Jianyu Wang, Vinayak Tantia, Nicolas Ballas, Michael G. Rabbat. 01 Oct 2019.
10. Robust and Communication-Efficient Collaborative Learning. Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani. 24 Jul 2019.
11. Optimal Distributed Online Prediction using Mini-Batches. O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao. 07 Dec 2010.