ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

How degenerate is the parametrization of neural networks with the ReLU activation function?

23 May 2019
Julius Berner
Dennis Elbrächter
Philipp Grohs

Papers citing "How degenerate is the parametrization of neural networks with the ReLU activation function?"

10 of 10 citing papers shown:
  • Computability of Optimizers. Yunseok Lee, Holger Boche, Gitta Kutyniok. 15 Jan 2023.
  • Local Identifiability of Deep ReLU Neural Networks: the Theory. Joachim Bona-Pellissier, François Malgouyres, F. Bachoc. 15 Jun 2022.
  • Learning ReLU networks to high uniform accuracy is intractable. Julius Berner, Philipp Grohs, F. Voigtlaender. 26 May 2022.
  • Parameter identifiability of a deep feedforward ReLU neural network. Joachim Bona-Pellissier, François Bachoc, François Malgouyres. 24 Dec 2021.
  • Non-convergence of stochastic gradient descent in the training of deep neural networks. Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek. 12 Jun 2020.
  • Universal Approximation with Deep Narrow Networks. Patrick Kidger, Terry Lyons. 21 May 2019.
  • Deep Neural Network Approximation Theory. Dennis Elbrächter, Dmytro Perekrestenko, Philipp Grohs, Helmut Bölcskei. 08 Jan 2019.
  • The Loss Surfaces of Multilayer Networks. A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun. 30 Nov 2014.
  • Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes. Ohad Shamir, Tong Zhang. 08 Dec 2012.
  • Improving neural networks by preventing co-adaptation of feature detectors. Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov. 03 Jul 2012.