arXiv:2112.07369
Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
13 December 2021
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa
Papers citing "Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions" (6 of 6 papers shown)
Non-convergence to the optimal risk for Adam and stochastic gradient descent optimization in the training of deep neural networks
Thang Do, Arnulf Jentzen, Adrian Riekert (03 Mar 2025)

Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks
Arnulf Jentzen, Adrian Riekert (07 Feb 2024)

Global Convergence of SGD On Two Layer Neural Nets
Pulkit Gopalani, Anirbit Mukherjee (20 Oct 2022)

Normalized gradient flow optimization in the training of ReLU artificial neural networks
Simon Eberle, Arnulf Jentzen, Adrian Riekert, G. Weiss (13 Jul 2022)

On bounds for norms of reparameterized ReLU artificial neural network parameters: sums of fractional powers of the Lipschitz norm control the network parameter vector
Arnulf Jentzen, T. Kröger (27 Jun 2022)

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt (16 Aug 2016)