On Almost Sure Convergence Rates of Stochastic Gradient Methods
Jun Liu, Ye Yuan
9 February 2022

Papers citing "On Almost Sure Convergence Rates of Stochastic Gradient Methods"

19 papers

Beyond adaptive gradient: Fast-Controlled Minibatch Algorithm for large-scale optimization
Corrado Coppola, Lorenzo Papa, Irene Amerini, L. Palagi
24 Nov 2024

A quantitative Robbins-Siegmund theorem
Morenikeji Neri, Thomas Powell
21 Oct 2024

On the SAGA algorithm with decreasing step
Luis Fredes, Bernard Bercu, Eméric Gbaguidi
02 Oct 2024

Almost sure convergence rates of stochastic gradient methods under gradient domination
Simon Weissmann, Sara Klein, Waïss Azizian, Leif Döring
22 May 2024

Convergence Rates for Stochastic Approximation: Biased Noise with Unbounded Variance, and Applications
R. Karandikar, M. Vidyasagar
05 Dec 2023

From Optimization to Control: Quasi Policy Iteration
Mohammad Amin Sharifi Kolarijani, Peyman Mohajerin Esfahani
18 Nov 2023

The Effect of SGD Batch Size on Autoencoder Learning: Sparsity, Sharpness, and Feature Learning
Nikhil Ghosh, Spencer Frei, Wooseok Ha, Ting Yu
06 Aug 2023

Convergence of Adam for Non-convex Objectives: Relaxed Hyperparameters and Non-ergodic Case
Meixuan He, Yuqing Liang, Jinlan Liu, Dongpo Xu
20 Jul 2023

Stability and Convergence of Distributed Stochastic Approximations with large Unbounded Stochastic Information Delays
Adrian Redder, Arunselvan Ramaswamy, Holger Karl
11 May 2023

High-dimensional scaling limits and fluctuations of online least-squares SGD with smooth covariance
Krishnakumar Balasubramanian, Promit Ghosal, Ye He
03 Apr 2023

Practical and Matching Gradient Variance Bounds for Black-Box Variational Bayesian Inference
Kyurae Kim, Kaiwen Wu, Jisu Oh, J. Gardner
18 Mar 2023

Statistical Inference for Linear Functionals of Online SGD in High-dimensional Linear Regression
Bhavya Agrawalla, Krishnakumar Balasubramanian, Promit Ghosal
20 Feb 2023

Almost Sure Saddle Avoidance of Stochastic Gradient Methods without the Bounded Gradient Assumption
Jun Liu, Ye Yuan
15 Feb 2023

Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity
Youssef Allouah, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan
03 Feb 2023

Convergence of Batch Updating Methods with Approximate Gradients and/or Noisy Measurements: Theory and Computational Results
Tadipatri Uday, M. Vidyasagar
12 Sep 2022

Emergent specialization from participation dynamics and multi-learner retraining
Sarah Dean, Mihaela Curmei, Lillian J. Ratliff, Jamie Morgenstern, Maryam Fazel
06 Jun 2022

Convergence of Batch Asynchronous Stochastic Approximation With Applications to Reinforcement Learning
R. Karandikar, M. Vidyasagar
08 Sep 2021

New Convergence Aspects of Stochastic Gradient Algorithms
Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, K. Scheinberg, Martin Takáč, Marten van Dijk
10 Nov 2018

Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes
Ohad Shamir, Tong Zhang
08 Dec 2012