ResearchTrend.AI

A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions

1 April 2021
Arnulf Jentzen
Adrian Riekert
    MLT

Papers citing "A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions"

7 / 7 papers shown
Non-convergence to the optimal risk for Adam and stochastic gradient descent optimization in the training of deep neural networks
Thang Do
Arnulf Jentzen
Adrian Riekert
56
1
0
03 Mar 2025
Operator theory, kernels, and Feedforward Neural Networks
P. Jorgensen
Myung-Sin Song
James Tian
30
0
0
03 Jan 2023
Identical Image Retrieval using Deep Learning
Sayan Nath
Nikhil Nayak
VLM
24
1
0
10 May 2022
Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
Martin Hutzenthaler
Arnulf Jentzen
Katharina Pohl
Adrian Riekert
Luca Scarpa
MLT
32
6
0
13 Dec 2021
Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation
Simon Eberle
Arnulf Jentzen
Adrian Riekert
G. Weiss
31
12
0
18 Aug 2021
A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions
Arnulf Jentzen
Adrian Riekert
25
13
0
10 Aug 2021
Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
Arnulf Jentzen
Adrian Riekert
19
23
0
09 Jul 2021