ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions (arXiv:2104.00277)

1 April 2021
Arnulf Jentzen, Adrian Riekert
MLT

Papers citing "A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions"

4 / 4 papers shown
Operator theory, kernels, and Feedforward Neural Networks
P. Jorgensen, Myung-Sin Song, James Tian
03 Jan 2023
Identical Image Retrieval using Deep Learning
Sayan Nath, Nikhil Nayak
VLM
10 May 2022
Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa
MLT
13 Dec 2021
Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
Arnulf Jentzen, Adrian Riekert
09 Jul 2021