ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Deep vs. shallow networks: An approximation theory perspective
H. Mhaskar, T. Poggio
arXiv:1608.03287, 10 August 2016

Papers citing "Deep vs. shallow networks: An approximation theory perspective" (13 of 63 shown)

  1. A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients
     Arnulf Jentzen, Diyora Salimova, Timo Welti (AI4CE) · 19 Sep 2018
  2. A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations
     Philipp Grohs, F. Hornung, Arnulf Jentzen, Philippe von Wurstemberger · 07 Sep 2018
  3. Universal Approximation with Quadratic Deep Networks
     Fenglei Fan, Jinjun Xiong, Ge Wang (PINN) · 31 Jul 2018
  4. ResNet with one-neuron hidden layers is a Universal Approximator
     Hongzhou Lin, Stefanie Jegelka · 28 Jun 2018
  5. Butterfly-Net: Optimal Function Representation Based on Convolutional Neural Networks
     Yingzhou Li, Xiuyuan Cheng, Jianfeng Lu · 18 May 2018
  6. The Role of Information Complexity and Randomization in Representation Learning
     Matías Vera, Pablo Piantanida, L. Rey Vega · 14 Feb 2018
  7. An efficient quantum algorithm for generative machine learning
     Xun Gao, Zhengyu Zhang, L. Duan · 06 Nov 2017
  8. Approximating Continuous Functions by ReLU Nets of Minimal Width
     Boris Hanin, Mark Sellke · 31 Oct 2017
  9. Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
     A. Cichocki, Anh-Huy Phan, Qibin Zhao, Namgil Lee, Ivan Oseledets, Masashi Sugiyama, Danilo P. Mandic · 30 Aug 2017
  10. On the approximation by single hidden layer feedforward neural networks with fixed weights
      Namig J. Guliyev, V. Ismailov (MLT) · 21 Aug 2017
  11. Optimal Approximation with Sparsely Connected Deep Neural Networks
      Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, P. Petersen · 04 May 2017
  12. Error bounds for approximations with deep ReLU networks
      Dmitry Yarotsky · 03 Oct 2016
  13. Why does deep and cheap learning work so well?
      Henry W. Lin, Max Tegmark, David Rolnick · 29 Aug 2016