Convergence rates for gradient descent in the training of overparameterized artificial neural networks with biases

23 February 2021
Arnulf Jentzen, T. Kröger
ODL
arXiv:2102.11840

Papers citing "Convergence rates for gradient descent in the training of overparameterized artificial neural networks with biases"

2 / 2 papers shown

Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks
Arnulf Jentzen, Adrian Riekert
07 Feb 2024

Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa
MLT
13 Dec 2021