How degenerate is the parametrization of neural networks with the ReLU activation function?
Julius Berner, Dennis Elbrächter, Philipp Grohs
23 May 2019 · arXiv:1905.09803
Papers citing "How degenerate is the parametrization of neural networks with the ReLU activation function?" (10 of 10 papers shown):
Computability of Optimizers. Yunseok Lee, Holger Boche, Gitta Kutyniok. 15 Jan 2023.
Local Identifiability of Deep ReLU Neural Networks: the Theory. Joachim Bona-Pellissier, François Malgouyres, François Bachoc. 15 Jun 2022.
Learning ReLU networks to high uniform accuracy is intractable. Julius Berner, Philipp Grohs, Felix Voigtlaender. 26 May 2022.
Parameter identifiability of a deep feedforward ReLU neural network. Joachim Bona-Pellissier, François Bachoc, François Malgouyres. 24 Dec 2021.
Non-convergence of stochastic gradient descent in the training of deep neural networks. Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek. 12 Jun 2020.
Universal Approximation with Deep Narrow Networks. Patrick Kidger, Terry Lyons. 21 May 2019.
Deep Neural Network Approximation Theory. Dennis Elbrächter, Dmytro Perekrestenko, Philipp Grohs, Helmut Bölcskei. 08 Jan 2019.
The Loss Surfaces of Multilayer Networks. Anna Choromańska, Mikael Henaff, Michaël Mathieu, Gérard Ben Arous, Yann LeCun. 30 Nov 2014.
Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes. Ohad Shamir, Tong Zhang. 08 Dec 2012.
Improving neural networks by preventing co-adaptation of feature detectors. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov. 03 Jul 2012.