Asymptotic study of stochastic adaptive algorithm in non-convex landscape

10 December 2020
S. Gadat, Ioana Gavra
ArXiv · PDF · HTML
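
The indexed paper concerns the asymptotic behavior of stochastic adaptive gradient algorithms on non-convex objectives. For orientation, the sketch below implements a generic coordinate-wise AdaGrad step, one representative adaptive method; the toy objective, noise model, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def adagrad(grad, x0, steps=2000, lr=0.5, eps=1e-8, noise=0.01, seed=0):
    """Coordinate-wise AdaGrad with a noisy gradient oracle (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    s = np.zeros_like(x)                     # accumulated squared gradients
    for _ in range(steps):
        g = grad(x) + noise * rng.standard_normal(x.shape)  # stochastic gradient
        s += g * g
        x -= lr * g / (np.sqrt(s) + eps)     # per-coordinate adaptive step size
    return x

# Toy non-convex landscape f(x) = sum(x_i**4 - x_i**2); gradient is 4x^3 - 2x.
# Local minimizers sit at +/- 1/sqrt(2) in each coordinate.
print(adagrad(lambda x: 4 * x**3 - 2 * x, x0=[1.5, -0.3]))
```

The divisor sqrt(s) shrinks the effective step size along coordinates with persistently large gradients; this per-coordinate adaptivity is the mechanism analyzed, under varying assumptions, by the citing works below.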

Papers citing "Asymptotic study of stochastic adaptive algorithm in non-convex landscape"

17 citing papers
• An Energy-Based Self-Adaptive Learning Rate for Stochastic Gradient Descent: Enhancing Unconstrained Optimization with VAV method. Jiahao Zhang, Christian Moya, Guang Lin. 10 Nov 2024.
• Convergence Analysis of Adaptive Gradient Methods under Refined Smoothness and Noise Assumptions. Devyani Maladkar, Ruichen Jiang, Aryan Mokhtari. 07 Jun 2024.
• Why Transformers Need Adam: A Hessian Perspective. Yushun Zhang, Congliang Chen, Tian Ding, Ziniu Li, Ruoyu Sun, Zhimin Luo. 26 Feb 2024.
• On Adaptive Stochastic Optimization for Streaming Data: A Newton's Method with O(dN) Operations. Antoine Godichon-Baggioni, Nicklas Werge. 29 Nov 2023.
• Convergence of Adam for Non-convex Objectives: Relaxed Hyperparameters and Non-ergodic Case. Meixuan He, Yuqing Liang, Jinlan Liu, Dongpo Xu. 20 Jul 2023.
• Convergence of AdaGrad for Non-convex Objectives: Simple Proofs and Relaxed Assumptions. Bo Wang, Huishuai Zhang, Zhirui Ma, Wei Chen. 29 May 2023.
• Convergence of Adam Under Relaxed Assumptions. Haochuan Li, Alexander Rakhlin, Ali Jadbabaie. 27 Apr 2023.
• Non asymptotic analysis of Adaptive stochastic gradient algorithms and applications. Antoine Godichon-Baggioni, Pierre Tarrago. 01 Mar 2023.
• Provable Adaptivity of Adam under Non-uniform Smoothness. Bohan Wang, Yushun Zhang, Huishuai Zhang, Qi Meng, Ruoyu Sun, Zhirui Ma, Tie-Yan Liu, Zhimin Luo, Wei Chen. 21 Aug 2022.
• Adam Can Converge Without Any Modification On Update Rules. Yushun Zhang, Congliang Chen, Naichen Shi, Ruoyu Sun, Zhimin Luo. 20 Aug 2022.
• The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance. Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel A. Ward. 11 Feb 2022.
• On the Convergence of mSGD and AdaGrad for Stochastic Optimization. Ruinan Jin, Yu Xing, Xingkang He. 26 Jan 2022.
• A theoretical and empirical study of new adaptive algorithms with additional momentum steps and shifted updates for stochastic non-convex optimization. C. Alecsa. 16 Oct 2021.
• Stochastic Subgradient Descent on a Generic Definable Function Converges to a Minimizer. S. Schechtman. 06 Sep 2021.
• Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance. Anas Barakat, Pascal Bianchi, W. Hachem, S. Schechtman. 07 Dec 2020.
• First-order Methods Almost Always Avoid Saddle Points. Jason D. Lee, Ioannis Panageas, Georgios Piliouras, Max Simchowitz, Michael I. Jordan, Benjamin Recht. 20 Oct 2017.
• A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights. Weijie Su, Stephen P. Boyd, Emmanuel J. Candes. 04 Mar 2015.