ResearchTrend.AI
arXiv: 2110.10342

Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond

20 October 2021
Chulhee Yun, Shashank Rajput, S. Sra
FedML

Papers citing "Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond" (28 papers)

Aspiration-based Perturbed Learning Automata in Games with Noisy Utility Measurements. Part B: Stochastic Stability in Weakly Acyclic Games. European Control Conference (ECC), 2018
Georgios C. Chasparis
23 Nov 2025

Understanding Outer Optimizers in Local SGD: Learning Rates, Momentum, and Acceleration
Ahmed Khaled, Satyen Kale, Arthur Douillard, Chi Jin, Rob Fergus, Manzil Zaheer
12 Sep 2025

Adjusted Shuffling SARAH: Advancing Complexity Analysis via Dynamic Gradient Weighting
Duc Toan Nguyen, Trang H. Tran, Lam M. Nguyen
14 Jun 2025

Generalization in Federated Learning: A Conditional Mutual Information Framework
Ziqiao Wang, Cheng Long, Yongyi Mao
FedML
06 Mar 2025

A Unified Analysis of Federated Learning with Arbitrary Client Participation. Neural Information Processing Systems (NeurIPS), 2022
Maroun Touma, Mingyue Ji
FedML
31 Dec 2024

A Statistical Analysis of Deep Federated Learning for Intrinsically Low-dimensional Data
Saptarshi Chakraborty, Peter L. Bartlett
FedML
28 Oct 2024

Does Worst-Performing Agent Lead the Pack? Analyzing Agent Dynamics in Unified Distributed SGD. Neural Information Processing Systems (NeurIPS), 2024
Jie Hu, Yi-Ting Ma, Do Young Eun
FedML
26 Sep 2024

High Probability Guarantees for Random Reshuffling
Hengxu Yu, Xiao Li
UQCV
20 Nov 2023

Minibatch and Local SGD: Algorithmic Stability and Linear Speedup in Generalization
Yunwen Lei, Tao Sun, Mingrui Liu
02 Oct 2023

Empirical Risk Minimization with Shuffled SGD: A Primal-Dual Perspective and Improved Bounds
Xu Cai, Cheuk Yin Lin, Jelena Diakonikolas
FedML
21 Jun 2023

Distributed Random Reshuffling Methods with Improved Convergence. IEEE Transactions on Automatic Control (TAC), 2023
Kun-Yen Huang, Linli Zhou, Shi Pu
21 Jun 2023

Convergence of AdaGrad for Non-convex Objectives: Simple Proofs and Relaxed Assumptions. Annual Conference on Computational Learning Theory (COLT), 2023
Bo Wang, Huishuai Zhang, Zhirui Ma, Wei Chen
29 May 2023

Tighter Lower Bounds for Shuffling SGD: Random Permutations and Beyond. International Conference on Machine Learning (ICML), 2023
Jaeyoung Cha, Jaewook Lee, Chulhee Yun
13 Mar 2023

On the Training Instability of Shuffling SGD with Batch Normalization. International Conference on Machine Learning (ICML), 2023
David Wu, Chulhee Yun, S. Sra
24 Feb 2023

Federated Minimax Optimization with Client Heterogeneity
Pranay Sharma, Rohan Panda, Gauri Joshi
FedML
08 Feb 2023

Federated Learning with Regularized Client Participation
Grigory Malinovsky, Samuel Horváth, Konstantin Burlachenko, Peter Richtárik
FedML
07 Feb 2023

On the Convergence of Federated Averaging with Cyclic Client Participation. International Conference on Machine Learning (ICML), 2023
Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang
FedML
06 Feb 2023

Coordinating Distributed Example Orders for Provably Accelerated Training. Neural Information Processing Systems (NeurIPS), 2023
A. Feder Cooper, Wentao Guo, Khiem Pham, Tiancheng Yuan, Charlie F. Ruan, Yucheng Lu, Chris De Sa
02 Feb 2023

SGDA with shuffling: faster convergence for nonconvex-PŁ minimax optimization. International Conference on Learning Representations (ICLR), 2022
Hanseul Cho, Chulhee Yun
12 Oct 2022

Efficiency Ordering of Stochastic Gradient Descent. Neural Information Processing Systems (NeurIPS), 2022
Jie Hu, Vishwaraj Doshi, Do Young Eun
15 Sep 2022

Accelerated Federated Learning with Decoupled Adaptive Optimization. International Conference on Machine Learning (ICML), 2022
Jiayin Jin, Jiaxiang Ren, Yang Zhou, Lingjuan Lyu, Ji Liu, Dejing Dou
AI4CE, FedML
14 Jul 2022

Federated Optimization Algorithms with Random Reshuffling and Gradient Compression
Abdurakhmon Sadiev, Grigory Malinovsky, Eduard A. Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik
FedML
14 Jun 2022

Anchor Sampling for Federated Learning with Partial Client Participation. International Conference on Machine Learning (ICML), 2022
Feijie Wu, Song Guo, Zhihao Qu, Shiqi He, Ziming Liu, Jing Gao
FedML
13 Jun 2022

Sampling without Replacement Leads to Faster Rates in Finite-Sum Minimax Optimization. Neural Information Processing Systems (NeurIPS), 2022
Aniket Das, Bernhard Schölkopf, Michael Muehlebach
07 Jun 2022

FedShuffle: Recipes for Better Use of Local Work in Federated Learning
Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael G. Rabbat
FedML
27 Apr 2022

Correlated quantization for distributed mean estimation and optimization. International Conference on Machine Learning (ICML), 2022
A. Suresh, Ziteng Sun, Jae Hun Ro, Felix X. Yu
09 Mar 2022

Characterizing & Finding Good Data Orderings for Fast Convergence of Sequential Gradient Methods
Amirkeivan Mohtashami, Sebastian U. Stich, Martin Jaggi
03 Feb 2022

Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization
Grigory Malinovsky, Konstantin Mishchenko, Peter Richtárik
FedML
26 Jan 2022