The Expressive Power of Neural Networks: A View from the Width

8 September 2017
Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, Liwei Wang

Papers citing "The Expressive Power of Neural Networks: A View from the Width"

Showing 38 of 138 citing papers (title; authors · topic tags · date):

Active Training of Physics-Informed Neural Networks to Aggregate and Interpolate Parametric Solutions to the Navier-Stokes Equations
Christopher J. Arthurs, A. King · PINN · 02 May 2020

On Deep Instrumental Variables Estimate
Ruiqi Liu, Zuofeng Shang, Guang Cheng · 30 Apr 2020

It's Not What Machines Can Learn, It's What We Cannot Teach
Gal Yehuda, Moshe Gabel, Assaf Schuster · FaML · 21 Feb 2020

A closer look at the approximation capabilities of neural networks
Kai Fong Ernest Chong · 16 Feb 2020

A Limited-Capacity Minimax Theorem for Non-Convex Games or: How I Learned to Stop Worrying about Mixed-Nash and Love Neural Nets
Gauthier Gidel, David Balduzzi, Wojciech M. Czarnecki, M. Garnelo, Yoram Bachrach · 14 Feb 2020

Invariant Risk Minimization Games
Kartik Ahuja, Karthikeyan Shanmugam, Kush R. Varshney, Amit Dhurandhar · OOD · 11 Feb 2020

Deep Network Approximation for Smooth Functions
Jianfeng Lu, Zuowei Shen, Haizhao Yang, Shijun Zhang · 09 Jan 2020

Sparse Weight Activation Training
Md Aamir Raihan, Tor M. Aamodt · 07 Jan 2020

Deep Learning via Dynamical Systems: An Approximation Perspective
Qianxiao Li, Ting Lin, Zuowei Shen · AI4TS, AI4CE · 22 Dec 2019

Are Transformers universal approximators of sequence-to-sequence functions?
Chulhee Yun, Srinadh Bhojanapalli, A. S. Rawat, Sashank J. Reddi, Sanjiv Kumar · 20 Dec 2019

Deep Learning-based Limited Feedback Designs for MIMO Systems
Jeonghyeon Jang, Hoon Lee, S. Hwang, Haibao Ren, Inkyu Lee · AI4CE · 19 Dec 2019

Analysis of Deep Neural Networks with Quasi-optimal polynomial approximation rates
Joseph Daws, Clayton Webster · 04 Dec 2019

Neural Contextual Bandits with UCB-based Exploration
Dongruo Zhou, Lihong Li, Quanquan Gu · 11 Nov 2019

Stochastic Feedforward Neural Networks: Universal Approximation
Thomas Merkh, Guido Montúfar · 22 Oct 2019

DirectPET: Full Size Neural Network PET Reconstruction from Sinogram Data
W. Whiteley, W. K. Luk, J. Gregor · 3DV, AI4TS · 19 Aug 2019

Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks
Alejandro Molina, P. Schramowski, Kristian Kersting · ODL · 15 Jul 2019

Densely Connected Search Space for More Flexible Neural Architecture Search
Jiemin Fang, Yuzhu Sun, Qian Zhang, Yuan Li, Wenyu Liu, Xinggang Wang · 23 Jun 2019

A Review on Deep Learning in Medical Image Reconstruction
Hai-Miao Zhang, Bin Dong · MedIm · 23 Jun 2019

The phase diagram of approximation rates for deep neural networks
Dmitry Yarotsky, Anton Zhevnerchuk · 22 Jun 2019

Deep Network Approximation Characterized by Number of Neurons
Zuowei Shen, Haizhao Yang, Shijun Zhang · 13 Jun 2019

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Mingxing Tan, Quoc V. Le · 3DV, MedIm · 28 May 2019

Universal Approximation with Deep Narrow Networks
Patrick Kidger, Terry Lyons · 21 May 2019

Nonlinear Approximation via Compositions
Zuowei Shen, Haizhao Yang, Shijun Zhang · 26 Feb 2019

A Survey of the Recent Architectures of Deep Convolutional Neural Networks
Asifullah Khan, A. Sohail, Umme Zahoora, Aqsa Saeed Qureshi · OOD · 17 Jan 2019

Enhanced Expressive Power and Fast Training of Neural Networks by Random Projections
Jian-Feng Cai, Dong Li, Jiaze Sun, Ke Wang · 22 Nov 2018

On a Sparse Shortcut Topology of Artificial Neural Networks
Fenglei Fan, Dayang Wang, Hengtao Guo, Qikui Zhu, Pingkun Yan, Ge Wang, Hengyong Yu · 22 Nov 2018

Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks
Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu · ODL · 21 Nov 2018

Gradient Descent Finds Global Minima of Deep Neural Networks
S. Du, J. Lee, Haochuan Li, Liwei Wang, Masayoshi Tomizuka · ODL · 09 Nov 2018

Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
Chulhee Yun, S. Sra, Ali Jadbabaie · 17 Oct 2018

Universal Approximation with Quadratic Deep Networks
Fenglei Fan, Jinjun Xiong, Ge Wang · PINN · 31 Jul 2018

ResNet with one-neuron hidden layers is a Universal Approximator
Hongzhou Lin, Stefanie Jegelka · 28 Jun 2018

On the Spectral Bias of Neural Networks
Nasim Rahaman, A. Baratin, Devansh Arpit, Felix Dräxler, Min Lin, Fred Hamprecht, Yoshua Bengio, Aaron Courville · 22 Jun 2018

Learning One-hidden-layer ReLU Networks via Gradient Descent
Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu · MLT · 20 Jun 2018

The Effect of Network Width on the Performance of Large-batch Training
Lingjiao Chen, Hongyi Wang, Jinman Zhao, Dimitris Papailiopoulos, Paraschos Koutris · 11 Jun 2018

Understanding Generalization and Optimization Performance of Deep CNNs
Pan Zhou, Jiashi Feng · MLT · 28 May 2018

Optimal approximation of continuous functions by very deep ReLU networks
Dmitry Yarotsky · 10 Feb 2018

The power of deeper networks for expressing natural functions
David Rolnick, Max Tegmark · 16 May 2017

Benefits of depth in neural networks
Matus Telgarsky · 14 Feb 2016