On the existence of optimal shallow feedforward networks with ReLU activation

6 March 2023
Steffen Dereich
Sebastian Kassing

Papers citing "On the existence of optimal shallow feedforward networks with ReLU activation"

2 of 2 papers shown
Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks
Arnulf Jentzen
Adrian Riekert
07 Feb 2024
Convergence of stochastic gradient descent schemes for Łojasiewicz-landscapes
Steffen Dereich
Sebastian Kassing
16 Feb 2021