Closing the convergence gap of SGD without replacement
arXiv: 2002.10400
24 February 2020
Shashank Rajput, Anant Gupta, Dimitris Papailiopoulos
Papers citing "Closing the convergence gap of SGD without replacement" (28 papers)
1. "Rapid Overfitting of Multi-Pass Stochastic Gradient Descent in Stochastic Convex Optimization" by Shira Vansover-Hager, Tomer Koren, Roi Livni (13 May 2025)
2. "Better Rates for Random Task Orderings in Continual Linear Models" by Itay Evron, Ran Levinstein, Matan Schliserman, Uri Sherman, Tomer Koren, Daniel Soudry, Nathan Srebro (06 Apr 2025) [CLL]
3. "Low-Rank Thinning" by Annabelle Michael Carrell, Albert Gong, Abhishek Shetty, Raaz Dwivedi, Lester W. Mackey (17 Feb 2025)
4. "Loss Gradient Gaussian Width based Generalization and Optimization Guarantees" by A. Banerjee, Qiaobo Li, Yingxue Zhou (11 Jun 2024)
5. "High Probability Guarantees for Random Reshuffling" by Hengxu Yu, Xiao Li (20 Nov 2023)
6. "Convergence of Sign-based Random Reshuffling Algorithms for Nonconvex Optimization" by Zhen Qin, Zhishuai Liu, Pan Xu (24 Oct 2023)
7. "Tighter Lower Bounds for Shuffling SGD: Random Permutations and Beyond" by Jaeyoung Cha, Jaewook Lee, Chulhee Yun (13 Mar 2023)
8. "On the Convergence of Federated Averaging with Cyclic Client Participation" by Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang (06 Feb 2023) [FedML]
9. "Adaptive Compression for Communication-Efficient Distributed Training" by Maksim Makarenko, Elnur Gasanov, Rustem Islamov, Abdurakhmon Sadiev, Peter Richtárik (31 Oct 2022)
10. "SGDA with shuffling: faster convergence for nonconvex-PŁ minimax optimization" by Hanseul Cho, Chulhee Yun (12 Oct 2022)
11. "Efficiency Ordering of Stochastic Gradient Descent" by Jie Hu, Vishwaraj Doshi, Do Young Eun (15 Sep 2022)
12. "Federated Optimization Algorithms with Random Reshuffling and Gradient Compression" by Abdurakhmon Sadiev, Grigory Malinovsky, Eduard A. Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik (14 Jun 2022) [FedML]
13. "Federated Random Reshuffling with Compression and Variance Reduction" by Grigory Malinovsky, Peter Richtárik (08 May 2022) [FedML]
14. "Benign Underfitting of Stochastic Gradient Descent" by Tomer Koren, Roi Livni, Yishay Mansour, Uri Sherman (27 Feb 2022) [MLT]
15. "Nesterov Accelerated Shuffling Gradient Method for Convex Optimization" by Trang H. Tran, K. Scheinberg, Lam M. Nguyen (07 Feb 2022)
16. "Characterizing & Finding Good Data Orderings for Fast Convergence of Sequential Gradient Methods" by Amirkeivan Mohtashami, Sebastian U. Stich, Martin Jaggi (03 Feb 2022)
17. "Optimal Rates for Random Order Online Optimization" by Uri Sherman, Tomer Koren, Yishay Mansour (29 Jun 2021)
18. "Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems" by Itay Safran, Ohad Shamir (12 Jun 2021)
19. "Can Single-Shuffle SGD be Better than Reshuffling SGD and GD?" by Chulhee Yun, S. Sra, Ali Jadbabaie (12 Mar 2021)
20. "Permutation-Based SGD: Is Random Optimal?" by Shashank Rajput, Kangwook Lee, Dimitris Papailiopoulos (19 Feb 2021)
21. "SMG: A Shuffling Gradient-Based Method with Momentum" by Trang H. Tran, Lam M. Nguyen, Quoc Tran-Dinh (24 Nov 2020)
22. "Breaking the Communication-Privacy-Accuracy Trilemma" by Wei-Ning Chen, Peter Kairouz, Ayfer Özgür (22 Jul 2020)
23. "Incremental Without Replacement Sampling in Nonconvex Optimization" by Edouard Pauwels (15 Jul 2020)
24. "Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization" by Chaobing Song, Yong Jiang, Yi Ma (18 Jun 2020)
25. "SGD with shuffling: optimal rates without component convexity and large epoch requirements" by Kwangjun Ahn, Chulhee Yun, S. Sra (12 Jun 2020)
26. "Random Reshuffling: Simple Analysis with Vast Improvements" by Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik (10 Jun 2020)
27. "A Unified Convergence Analysis for Shuffling-Type Gradient Methods" by Lam M. Nguyen, Quoc Tran-Dinh, Dzung Phan, Phuong Ha Nguyen, Marten van Dijk (19 Feb 2020)
28. "How Good is SGD with Random Shuffling?" by Itay Safran, Ohad Shamir (31 Jul 2019)