ResearchTrend.AI

Asymptotic study of stochastic adaptive algorithm in non-convex landscape

Journal of Machine Learning Research (JMLR), 2020
10 December 2020
S. Gadat
Ioana Gavra
arXiv: 2012.05640 (abs | PDF | HTML)

Papers citing "Asymptotic study of stochastic adaptive algorithm in non-convex landscape"

15 / 15 papers shown
An Energy-Based Self-Adaptive Learning Rate for Stochastic Gradient Descent: Enhancing Unconstrained Optimization with VAV method
Jiahao Zhang
Christian Moya
Guang Lin
10 Nov 2024
Provable Complexity Improvement of AdaGrad over SGD: Upper and Lower Bounds in Stochastic Non-Convex Optimization
Annual Conference Computational Learning Theory (COLT), 2024
Devyani Maladkar
Ruichen Jiang
Aryan Mokhtari
07 Jun 2024
Why Transformers Need Adam: A Hessian Perspective
Yushun Zhang
Congliang Chen
Tian Ding
Ziniu Li
Zhimin Luo
26 Feb 2024
On Adaptive Stochastic Optimization for Streaming Data: A Newton's Method with O(dN) Operations
Antoine Godichon-Baggioni
Nicklas Werge
29 Nov 2023
Convergence of Adam for Non-convex Objectives: Relaxed Hyperparameters and Non-ergodic Case
Machine Learning (ML), 2023
Meixuan He
Yuqing Liang
Jinlan Liu
Dongpo Xu
20 Jul 2023
Convergence of AdaGrad for Non-convex Objectives: Simple Proofs and Relaxed Assumptions
Annual Conference Computational Learning Theory (COLT), 2023
Bo Wang
Huishuai Zhang
Zhirui Ma
Wei Chen
29 May 2023
Convergence of Adam Under Relaxed Assumptions
Neural Information Processing Systems (NeurIPS), 2023
Haochuan Li
Alexander Rakhlin
Ali Jadbabaie
27 Apr 2023
Non asymptotic analysis of Adaptive stochastic gradient algorithms and applications
Antoine Godichon-Baggioni
Pierre Tarrago
01 Mar 2023
Provable Adaptivity of Adam under Non-uniform Smoothness
Knowledge Discovery and Data Mining (KDD), 2022
Bohan Wang
Yushun Zhang
Huishuai Zhang
Qi Meng
Tian Ding
Zhirui Ma
Tie-Yan Liu
Zhimin Luo
Wei Chen
21 Aug 2022
Adam Can Converge Without Any Modification On Update Rules
Neural Information Processing Systems (NeurIPS), 2022
Yushun Zhang
Congliang Chen
Naichen Shi
Tian Ding
Zhimin Luo
20 Aug 2022
The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance
Annual Conference Computational Learning Theory (COLT), 2022
Matthew Faw
Isidoros Tziotis
Constantine Caramanis
Aryan Mokhtari
Sanjay Shakkottai
Rachel A. Ward
11 Feb 2022
On the Convergence of mSGD and AdaGrad for Stochastic Optimization
International Conference on Learning Representations (ICLR), 2022
Ruinan Jin
Yu Xing
Xingkang He
26 Jan 2022
A theoretical and empirical study of new adaptive algorithms with additional momentum steps and shifted updates for stochastic non-convex optimization
C. Alecsa
16 Oct 2021
Stochastic Subgradient Descent on a Generic Definable Function Converges to a Minimizer
S. Schechtman
06 Sep 2021
Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance
Anas Barakat
Pascal Bianchi
W. Hachem
S. Schechtman
07 Dec 2020