Approximating Continuous Functions by ReLU Nets of Minimal Width
Boris Hanin, Mark Sellke
arXiv:1710.11278, 31 October 2017
Papers citing "Approximating Continuous Functions by ReLU Nets of Minimal Width" (8 of 58 papers shown):
- ResNet with one-neuron hidden layers is a Universal Approximator. Hongzhou Lin, Stefanie Jegelka. 28 Jun 2018.
- Learning One-hidden-layer ReLU Networks via Gradient Descent. Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu. 20 Jun 2018.
- On Tighter Generalization Bound for Deep Neural Networks: CNNs, ResNets, and Beyond. Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, T. Zhao. 13 Jun 2018.
- Mad Max: Affine Spline Insights into Deep Learning. Randall Balestriero, Richard Baraniuk. 17 May 2018.
- Optimal approximation of continuous functions by very deep ReLU networks. Dmitry Yarotsky. 10 Feb 2018.
- The power of deeper networks for expressing natural functions. David Rolnick, Max Tegmark. 16 May 2017.
- Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean. 26 Sep 2016.
- Benefits of depth in neural networks. Matus Telgarsky. 14 Feb 2016.