Kolmogorov Width Decay and Poor Approximators in Machine Learning: Shallow Neural Networks, Random Feature Models and Neural Tangent Kernels
Weinan E, Stephan Wojtowytsch
21 May 2020
arXiv: 2005.10807
Papers citing "Kolmogorov Width Decay and Poor Approximators in Machine Learning: Shallow Neural Networks, Random Feature Models and Neural Tangent Kernels" (9 of 9 papers shown)
Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks
Fanghui Liu, L. Dadi, V. Cevher
29 Apr 2024

Embeddings between Barron spaces with higher order activation functions
T. J. Heeringa, L. Spek, Felix L. Schwenninger, C. Brune
25 May 2023

Reinforcement Learning with Function Approximation: From Linear to Nonlinear
Jihao Long, Jiequn Han
20 Feb 2023

Duality for Neural Networks through Reproducing Kernel Banach Spaces
L. Spek, T. J. Heeringa, Felix L. Schwenninger, C. Brune
09 Nov 2022

Lagrangian PINNs: A causality-conforming solution to failure modes of physics-informed neural networks
R. Mojgani, Maciej Balajewicz, P. Hassanzadeh
05 May 2022

Qualitative neural network approximation over R and C: Elementary proofs for analytic and polynomial activation
Josiah Park, Stephan Wojtowytsch
25 Mar 2022

Generalization Error of GAN from the Discriminator's Perspective
Hongkang Yang, Weinan E
08 Jul 2021

Representation formulas and pointwise properties for Barron functions
Weinan E, Stephan Wojtowytsch
10 Jun 2020

Can Shallow Neural Networks Beat the Curse of Dimensionality? A mean field training perspective
Stephan Wojtowytsch, Weinan E
21 May 2020