STL-SGD: Speeding Up Local SGD with Stagewise Communication Period
arXiv:2006.06377, 11 June 2020
Shuheng Shen, Yifei Cheng, Jingchang Liu, Linli Xu
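The paper's title describes Local SGD in which workers take several local gradient steps between synchronizations, and the communication period grows stagewise over training. A minimal toy sketch of that idea on a least-squares problem; the doubling schedule, learning rate, and objective here are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def local_sgd_stagewise(A_list, b_list, n_rounds=6, steps0=1, lr=0.5):
    """Local SGD on a toy least-squares problem.

    Worker k holds data (A_list[k], b_list[k]) and takes local gradient
    steps on the mean-squared loss (1/2n)*||A x - b||^2. After each
    stage the workers average their models (the communication step),
    and the number of local steps per stage doubles (stagewise growth
    of the communication period).
    """
    d = A_list[0].shape[1]
    x = np.zeros(d)                        # model shared after each sync
    steps = steps0
    for _ in range(n_rounds):
        local_models = []
        for A, b in zip(A_list, b_list):
            xk = x.copy()
            for _ in range(steps):         # local updates, no communication
                xk -= lr * A.T @ (A @ xk - b) / len(b)
            local_models.append(xk)
        x = np.mean(local_models, axis=0)  # communicate: average the models
        steps *= 2                         # stagewise communication period
    return x
```

When the workers' data are generated from a common ground-truth model, the averaged iterate converges to the shared least-squares solution while communicating only `n_rounds` times rather than once per gradient step, which is the communication saving the title refers to.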
Papers citing "STL-SGD: Speeding Up Local SGD with Stagewise Communication Period" (7 papers)
1. Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis
   Ruichen Luo, Sebastian U. Stich, Samuel Horváth, Martin Takáč (08 Jan 2025)
2. EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models
   Jialiang Cheng, Ning Gao, Yun Yue, Zhiling Ye, Jiadi Jiang, Jian Sha (10 Dec 2024)
3. STSyn: Speeding Up Local SGD with Straggler-Tolerant Synchronization
   Feng Zhu, Jingjing Zhang, Xin Eric Wang (06 Oct 2022)
4. Threats to Federated Learning: A Survey
   Lingjuan Lyu, Han Yu, Qiang Yang (04 Mar 2020)
5. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
   N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016)
6. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
   Hamed Karimi, J. Nutini, Mark W. Schmidt (16 Aug 2016)
7. Optimal Distributed Online Prediction using Mini-Batches
   O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao (07 Dec 2010)