Gradient descent provably escapes saddle points in the training of shallow ReLU networks

3 August 2022
Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek

Papers citing "Gradient descent provably escapes saddle points in the training of shallow ReLU networks"

2 / 2 papers shown
Global Convergence of SGD On Two Layer Neural Nets
Pulkit Gopalani, Anirbit Mukherjee
20 Oct 2022

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
30 Nov 2014