A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions

10 August 2021
Arnulf Jentzen, Adrian Riekert

Papers citing "A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions"

Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks
Arnulf Jentzen, Adrian Riekert
7 February 2024

Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa
13 December 2021

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 August 2016