
Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions

19 March 2021 · arXiv:2103.10922
Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek

Papers citing "Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions"

10 / 10 papers shown

1. Loss Landscape of Shallow ReLU-like Neural Networks: Stationary Points, Saddle Escape, and Network Embedding
   Zhengqing Wu, Berfin Simsek, Francois Ged · ODL · 08 Feb 2024

2. On the existence of minimizers in shallow residual ReLU neural network optimization landscapes
   Steffen Dereich, Arnulf Jentzen, Sebastian Kassing · 28 Feb 2023

3. Gradient descent provably escapes saddle points in the training of shallow ReLU networks
   Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek · 03 Aug 2022

4. On the Omnipresence of Spurious Local Minima in Certain Neural Network Training Problems
   C. Christof, Julia Kowalczyk · 23 Feb 2022

5. Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions
   Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa · MLT · 13 Dec 2021

6. Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation
   Simon Eberle, Arnulf Jentzen, Adrian Riekert, G. Weiss · 18 Aug 2021

7. A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions
   Arnulf Jentzen, Adrian Riekert · 10 Aug 2021

8. Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
   Arnulf Jentzen, Adrian Riekert · 09 Jul 2021

9. A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
   Arnulf Jentzen, Adrian Riekert · MLT · 01 Apr 2021

10. The Loss Surfaces of Multilayer Networks
    A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun · ODL · 30 Nov 2014