ResearchTrend.AI
Home › Papers › 2106.06880 › Cited By
Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems
Itay Safran, Ohad Shamir
12 June 2021

Papers citing "Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems"

15 / 15 papers shown
  1. Rapid Overfitting of Multi-Pass Stochastic Gradient Descent in Stochastic Convex Optimization
     Shira Vansover-Hager, Tomer Koren, Roi Livni (13 May 2025)
  2. Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation
     Grigory Malinovsky, Umberto Michieli, Hasan Hammoud, Taha Ceritli, Hayder Elesedy, Mete Ozay, Peter Richtárik [AI4CE] (10 Oct 2024)
  3. Demystifying SGD with Doubly Stochastic Gradients
     Kyurae Kim, Joohwan Ko, Yian Ma, Jacob R. Gardner (03 Jun 2024)
  4. On the Last-Iterate Convergence of Shuffling Gradient Methods
     Zijian Liu, Zhengyuan Zhou (12 Mar 2024)
  5. Convergence of Sign-based Random Reshuffling Algorithms for Nonconvex Optimization
     Zhen Qin, Zhishuai Liu, Pan Xu (24 Oct 2023)
  6. Mini-Batch Optimization of Contrastive Loss
     Jaewoong Cho, Kartik K. Sreenivasan, Keon Lee, Kyunghoo Mun, Soheun Yi, Jeong-Gwan Lee, Anna Lee, Jy-yong Sohn, Dimitris Papailiopoulos, Kangwook Lee [SSL] (12 Jul 2023)
  7. On Convergence of Incremental Gradient for Non-Convex Smooth Functions
     Anastasia Koloskova, N. Doikov, Sebastian U. Stich, Martin Jaggi (30 May 2023)
  8. Fast Convergence of Random Reshuffling under Over-Parameterization and the Polyak-Łojasiewicz Condition
     Chen Fan, Christos Thrampoulidis, Mark W. Schmidt (02 Apr 2023)
  9. On the Convergence of Federated Averaging with Cyclic Client Participation
     Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong Zhang [FedML] (06 Feb 2023)
  10. Federated Optimization Algorithms with Random Reshuffling and Gradient Compression
      Abdurakhmon Sadiev, Grigory Malinovsky, Eduard A. Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik [FedML] (14 Jun 2022)
  11. Sampling without Replacement Leads to Faster Rates in Finite-Sum Minimax Optimization
      Aniket Das, Bernhard Schölkopf, Michael Muehlebach (07 Jun 2022)
  12. Benign Underfitting of Stochastic Gradient Descent
      Tomer Koren, Roi Livni, Yishay Mansour, Uri Sherman [MLT] (27 Feb 2022)
  13. Permutation-Based SGD: Is Random Optimal?
      Shashank Rajput, Kangwook Lee, Dimitris Papailiopoulos (19 Feb 2021)
  14. Proximal and Federated Random Reshuffling
      Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik [FedML] (12 Feb 2021)
  15. Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
      Ohad Shamir, Tong Zhang (08 Dec 2012)