ResearchTrend.AI

Convergence and concentration properties of constant step-size SGD through Markov chains (arXiv:2306.11497)

20 June 2023
Ibrahim Merad
Stéphane Gaïffas

Papers citing "Convergence and concentration properties of constant step-size SGD through Markov chains"

9 papers shown

  1. A Piecewise Lyapunov Analysis of Sub-quadratic SGD: Applications to Robust and Quantile Regression. Yixuan Zhang, Dongyan, Yudong Chen, Qiaomin Xie. 11 Apr 2025.
  2. Online Inference for Quantiles by Constant Learning-Rate Stochastic Gradient Descent. Ziyang Wei, Jiaqi Li, Likai Chen, W. Wu. 04 Mar 2025.
  3. Coupling-based Convergence Diagnostic and Stepsize Scheme for Stochastic Gradient Descent. Xiang Li, Qiaomin Xie. 15 Dec 2024.
  4. Nonasymptotic Analysis of Stochastic Gradient Descent with the Richardson-Romberg Extrapolation. Marina Sheshukova, Denis Belomestny, Alain Durmus, Eric Moulines, Alexey Naumov, S. Samsonov. 07 Oct 2024.
  5. The Effect of SGD Batch Size on Autoencoder Learning: Sparsity, Sharpness, and Feature Learning. Nikhil Ghosh, Spencer Frei, Wooseok Ha, Ting Yu. 06 Aug 2023.
  6. A High Probability Analysis of Adaptive SGD with Momentum. Xiaoyun Li, Francesco Orabona. 28 Jul 2020.
  7. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition. Hamed Karimi, J. Nutini, Mark W. Schmidt. 16 Aug 2016.
  8. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method. Simon Lacoste-Julien, Mark W. Schmidt, Francis R. Bach. 10 Dec 2012.
  9. Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes. Ohad Shamir, Tong Zhang. 08 Dec 2012.