Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks

7 February 2024
Arnulf Jentzen, Adrian Riekert
arXiv:2402.05155
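
Every paper in the list below studies the convergence behavior of the Adam optimizer or of plain stochastic gradient descent. For context, the following is a minimal sketch of the standard Adam update rule of Kingma and Ba; the function name, hyperparameter defaults, and NumPy-based form are illustrative assumptions and are not taken from the paper above or from the citing works.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Illustrative sketch of one Adam update; not code from any of the listed papers.
    # Exponential moving averages of the gradient and of its elementwise square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    # Bias correction for the zero initialization of m and v (t is the 1-based step count).
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    # Per-coordinate adaptive step; eps guards against division by zero.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

Here theta, grad, m, and v would be NumPy arrays of the same shape (m and v initialized to zeros), and t the step counter starting at 1. The non-convergence and convergence-rate results cited on this page concern updates of this kind applied to neural-network training losses.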

Papers citing "Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks"

5 / 5 papers shown
1. Sharp higher order convergence rates for the Adam optimizer
   Steffen Dereich, Arnulf Jentzen, Adrian Riekert (28 Apr 2025)
2. Non-convergence to the optimal risk for Adam and stochastic gradient descent optimization in the training of deep neural networks
   Thang Do, Arnulf Jentzen, Adrian Riekert (03 Mar 2025)
3. Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$
   R. Gentile, G. Welper (17 Sep 2022)
4. A Simple Convergence Proof of Adam and Adagrad
   Alexandre Défossez, Léon Bottou, Francis R. Bach, Nicolas Usunier (05 Mar 2020)
5. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
   Hamed Karimi, J. Nutini, Mark W. Schmidt (16 Aug 2016)