ResearchTrend.AI
Communication trade-offs for synchronized distributed SGD with large step size

arXiv: 1904.11325
25 April 2019
Kumar Kshitij Patel
Aymeric Dieuleveut
    FedML
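The paper's title refers to the local-SGD setting: several workers take multiple SGD steps between synchronization (averaging) rounds, trading communication for extra local computation. Below is a minimal illustrative sketch of that scheme on a toy quadratic objective; the objective, parameter names (`K`, `H`, `lr`), and all values are assumptions for illustration, not the paper's algorithm or experiments.

```python
import numpy as np

def local_sgd(K=4, H=8, rounds=50, lr=0.1, seed=0):
    """Local SGD sketch: K workers, H local steps per communication round.

    Each worker minimizes f(x) = 0.5 * ||x - x_star||^2 with noisy gradients;
    workers average their iterates once per round. Returns the final error.
    """
    rng = np.random.default_rng(seed)
    d = 5
    x_star = rng.normal(size=d)       # common optimum, shared by all workers
    workers = np.zeros((K, d))        # each row is one worker's local iterate
    for _ in range(rounds):
        for _ in range(H):            # H local steps with no communication
            noise = rng.normal(scale=0.5, size=(K, d))
            grads = (workers - x_star) + noise   # noisy gradient of the quadratic
            workers -= lr * grads
        workers[:] = workers.mean(axis=0)        # one synchronization: average iterates
    return float(np.linalg.norm(workers[0] - x_star))

print(local_sgd())
```

Setting `H=1` recovers fully synchronized (mini-batch-style) SGD, while larger `H` communicates less often at the cost of worker drift; the step size mediates this trade-off, which is the regime the paper analyzes.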

Papers citing "Communication trade-offs for synchronized distributed SGD with large step size" (8 papers):

  1. FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling and Correction. Liang Gao, H. Fu, Li Li, Yingwen Chen, Minghua Xu, Chengzhong Xu. FedML. 22 Mar 2022.
  2. Towards Federated Learning on Time-Evolving Heterogeneous Data. Yongxin Guo, Tao R. Lin, Xiaoying Tang. FedML. 25 Dec 2021.
  3. Faster Non-Convex Federated Learning via Global and Local Momentum. Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu. FedML. 07 Dec 2020.
  4. A Unified Theory of Decentralized SGD with Changing Topology and Local Updates. Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich. FedML. 23 Mar 2020.
  5. Don't Use Large Mini-Batches, Use Local SGD. Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi. 22 Aug 2018.
  6. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. ODL. 15 Sep 2016.
  7. Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes. Ohad Shamir, Tong Zhang. 08 Dec 2012.
  8. Optimal Distributed Online Prediction using Mini-Batches. O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao. 07 Dec 2010.