A Unified Convergence Analysis for Shuffling-Type Gradient Methods

19 February 2020
Lam M. Nguyen
Quoc Tran-Dinh
Dzung Phan
Phuong Ha Nguyen
Marten van Dijk
arXiv:2002.08246
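
For context, shuffling-type gradient methods process the n component functions of a finite sum in a permuted order each epoch, rather than sampling indices with replacement as in vanilla SGD; random reshuffling draws a fresh permutation every epoch. A minimal NumPy sketch of the random-reshuffling variant (illustrative only: function and variable names are ours, and a constant step size stands in for the step-size schedules the paper actually analyzes):

```python
import numpy as np

def shuffling_sgd(grad_i, x0, n, lr=0.01, epochs=50, seed=0):
    """Shuffling-type SGD (random-reshuffling variant): each epoch makes
    one incremental pass over a fresh permutation of the n component
    gradients, i.e., it samples indices without replacement."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):    # new shuffle every epoch
            x = x - lr * grad_i(x, i)   # step on component gradient of f_i
    return x

# Toy finite sum: f(x) = (1/n) * sum_i (a_i^T x - b_i)^2
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_i = lambda x, i: 2.0 * A[i] * (A[i] @ x - b[i])
x_hat = shuffling_sgd(grad_i, np.zeros(5), n=100)
```

Fixing the permutation to the identity recovers incremental gradient descent, and shuffling once up front gives single-shuffle SGD; the unified analysis covers these orderings as special cases.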

Papers citing "A Unified Convergence Analysis for Shuffling-Type Gradient Methods"

Showing 50 of 53 citing papers.

Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling
Junyi Li, Heng Huang
07 Nov 2024

Shuffling Gradient-Based Methods for Nonconvex-Concave Minimax Optimization
Quoc Tran-Dinh, Trang H. Tran, Lam M. Nguyen
29 Oct 2024

Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation
Grigory Malinovsky, Umberto Michieli, Hasan Hammoud, Taha Ceritli, Hayder Elesedy, Mete Ozay, Peter Richtárik
10 Oct 2024

JKO for Landau: a variational particle method for homogeneous Landau equation
Yan Huang, Li Wang
18 Sep 2024

Tracking solutions of time-varying variational inequalities
Hédi Hadiji, Sarah Sachs, Cristóbal Guzmán
20 Jun 2024

A Generalized Version of Chung's Lemma and its Applications
Li Jiang, Xiao Li, Andre Milzarek, Junwen Qiu
09 Jun 2024

Demystifying SGD with Doubly Stochastic Gradients
Kyurae Kim, Joohwan Ko, Yian Ma, Jacob R. Gardner
03 Jun 2024

On the Last-Iterate Convergence of Shuffling Gradient Methods
Zijian Liu, Zhengyuan Zhou
12 Mar 2024

Last Iterate Convergence of Incremental Methods and Applications in Continual Learning
Xu Cai, Jelena Diakonikolas
11 Mar 2024

Shuffling Momentum Gradient Algorithm for Convex Optimization
Trang H. Tran, Quoc Tran-Dinh, Lam M. Nguyen
05 Mar 2024

Mini-batch Gradient Descent with Buffer
Haobo Qi, Du Huang, Yingqiu Zhu, Danyang Huang, Hansheng Wang
14 Dec 2023

RINAS: Training with Dataset Shuffling Can Be General and Fast
Tianle Zhong, Jiechen Zhao, Xindi Guo, Qiang Su, Geoffrey C. Fox
04 Dec 2023

A New Random Reshuffling Method for Nonsmooth Nonconvex Finite-sum Optimization
Junwen Qiu, Xiao Li, Andre Milzarek
02 Dec 2023

High Probability Guarantees for Random Reshuffling
Hengxu Yu, Xiao Li
20 Nov 2023

AsGrad: A Sharp Unified Analysis of Asynchronous-SGD Algorithms
Rustem Islamov, M. Safaryan, Dan Alistarh
31 Oct 2023

Convergence of Sign-based Random Reshuffling Algorithms for Nonconvex Optimization
Zhen Qin, Zhishuai Liu, Pan Xu
24 Oct 2023

Demystifying the Myths and Legends of Nonconvex Convergence of SGD
Aritra Dutta, El Houcine Bergou, Soumia Boucherouite, Nicklas Werge, M. Kandemir, Xin Li
19 Oct 2023

Mini-Batch Optimization of Contrastive Loss
Jaewoong Cho, Kartik K. Sreenivasan, Keon Lee, Kyunghoo Mun, Soheun Yi, Jeong-Gwan Lee, Anna Lee, Jy-yong Sohn, Dimitris Papailiopoulos, Kangwook Lee
12 Jul 2023

On The Impact of Machine Learning Randomness on Group Fairness
Prakhar Ganesh, Hong Chang, Martin Strobel, Reza Shokri
09 Jul 2023

Ordering for Non-Replacement SGD
Yuetong Xu, Baharan Mirzasoleiman
28 Jun 2023

Empirical Risk Minimization with Shuffled SGD: A Primal-Dual Perspective and Improved Bounds
Xu Cai, Cheuk Yin Lin, Jelena Diakonikolas
21 Jun 2023

Distributed Random Reshuffling Methods with Improved Convergence
Kun-Yen Huang, Linli Zhou, Shi Pu
21 Jun 2023

On Convergence of Incremental Gradient for Non-Convex Smooth Functions
Anastasia Koloskova, N. Doikov, Sebastian U. Stich, Martin Jaggi
30 May 2023

Fast Convergence of Random Reshuffling under Over-Parameterization and the Polyak-Łojasiewicz Condition
Chen Fan, Christos Thrampoulidis, Mark W. Schmidt
02 Apr 2023

On the Training Instability of Shuffling SGD with Batch Normalization
David Wu, Chulhee Yun, S. Sra
24 Feb 2023

On the Convergence of Federated Averaging with Cyclic Client Participation
Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang
06 Feb 2023

Convergence of ease-controlled Random Reshuffling gradient Algorithms under Lipschitz smoothness
R. Seccia, Corrado Coppola, G. Liuzzi, L. Palagi
04 Dec 2022

SGDA with shuffling: faster convergence for nonconvex-PŁ minimax optimization
Hanseul Cho, Chulhee Yun
12 Oct 2022

On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms
Lam M. Nguyen, Trang H. Tran
13 Jun 2022

A Unified Convergence Theorem for Stochastic Optimization Methods
Xiao Li, Andre Milzarek
08 Jun 2022

Sampling without Replacement Leads to Faster Rates in Finite-Sum Minimax Optimization
Aniket Das, Bernhard Schölkopf, Michael Muehlebach
07 Jun 2022

Nesterov Accelerated Shuffling Gradient Method for Convex Optimization
Trang H. Tran, K. Scheinberg, Lam M. Nguyen
07 Feb 2022

Finite-Sum Optimization: A New Perspective for Convergence to a Global Solution
Lam M. Nguyen, Trang H. Tran, Marten van Dijk
07 Feb 2022

Characterizing & Finding Good Data Orderings for Fast Convergence of Sequential Gradient Methods
Amirkeivan Mohtashami, Sebastian U. Stich, Martin Jaggi
03 Feb 2022

Distributed Random Reshuffling over Networks
Kun-Yen Huang, Xiao Li, Andre Milzarek, Shi Pu, Junwen Qiu
31 Dec 2021

Random-reshuffled SARAH does not need a full gradient computations
Aleksandr Beznosikov, Martin Takáč
26 Nov 2021

Convergence of Random Reshuffling Under The Kurdyka-Łojasiewicz Inequality
Xiao Li, Andre Milzarek, Junwen Qiu
10 Oct 2021

Optimal Rates for Random Order Online Optimization
Uri Sherman, Tomer Koren, Yishay Mansour
29 Jun 2021

Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems
Itay Safran, Ohad Shamir
12 Jun 2021

Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization
Yaodong Yu, Tianyi Lin, Eric Mazumdar, Michael I. Jordan
27 Apr 2021

Random Reshuffling with Variance Reduction: New Analysis and Better Rates
Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik
19 Apr 2021

Permutation-Based SGD: Is Random Optimal?
Shashank Rajput, Kangwook Lee, Dimitris Papailiopoulos
19 Feb 2021

Recent Theoretical Advances in Non-Convex Optimization
Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard A. Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev
11 Dec 2020

SMG: A Shuffling Gradient-Based Method with Momentum
Trang H. Tran, Lam M. Nguyen, Quoc Tran-Dinh
24 Nov 2020

An Approximation Algorithm for Optimal Subarchitecture Extraction
Adrian de Wynter
16 Oct 2020

An Algorithm for Learning Smaller Representations of Models With Scarce Data
Adrian de Wynter
15 Oct 2020

Incremental Without Replacement Sampling in Nonconvex Optimization
Edouard Pauwels
15 Jul 2020

Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs
Xunpeng Huang, Hao Zhou, Runxin Xu, Zhe Wang, Lei Li
12 Jun 2020

SGD with shuffling: optimal rates without component convexity and large epoch requirements
Kwangjun Ahn, Chulhee Yun, S. Sra
12 Jun 2020

Random Reshuffling: Simple Analysis with Vast Improvements
Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik
10 Jun 2020