ResearchTrend.AI
arXiv:1906.09477 · Cited By
The phase diagram of approximation rates for deep neural networks
Dmitry Yarotsky, Anton Zhevnerchuk
22 June 2019

Papers citing "The phase diagram of approximation rates for deep neural networks"

35 / 35 papers shown

  1. Statistically guided deep learning · Michael Kohler, A. Krzyżak · ODL, BDL · 11 Apr 2025
  2. Approximation properties of neural ODEs · Arturo De Marinis, Davide Murari, E. Celledoni, Nicola Guglielmi, B. Owren, Francesco Tudisco · 19 Mar 2025
  3. Deep Kalman Filters Can Filter · Blanka Hovart, Anastasis Kratsios, Yannick Limmer, Xuwei Yang · 31 Dec 2024
  4. On the expressiveness and spectral bias of KANs · Yixuan Wang, Jonathan W. Siegel, Ziming Liu, Thomas Y. Hou · 02 Oct 2024
  5. On the optimal approximation of Sobolev and Besov functions using deep ReLU neural networks · Yunfei Yang · 02 Sep 2024
  6. Approximation Rates and VC-Dimension Bounds for (P)ReLU MLP Mixture of Experts · Anastasis Kratsios, Haitz Sáez de Ocáriz Borde, Takashi Furuya, Marc T. Law · MoE · 05 Feb 2024
  7. Universal Consistency of Wide and Deep ReLU Neural Networks and Minimax Optimal Convergence Rates for Kolmogorov-Donoho Optimal Function Classes · Hyunouk Ko, Xiaoming Huo · 08 Jan 2024
  8. Deep Learning and Computational Physics (Lecture Notes) · Deep Ray, Orazio Pinti, Assad A. Oberai · PINN, AI4CE · 03 Jan 2023
  9. Instance-Dependent Generalization Bounds via Optimal Transport · Songyan Hou, Parnian Kassraie, Anastasis Kratsios, Andreas Krause, Jonas Rothfuss · 02 Nov 2022
  10. Analysis of the rate of convergence of an over-parametrized deep neural network estimate learned by gradient descent · Michael Kohler, A. Krzyżak · 04 Oct 2022
  11. Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$ · R. Gentile, G. Welper · ODL · 17 Sep 2022
  12. On the universal consistency of an over-parametrized deep neural network estimate learned by gradient descent · Selina Drews, Michael Kohler · 30 Aug 2022
  13. The BUTTER Zone: An Empirical Study of Training Dynamics in Fully Connected Neural Networks · Charles Edison Tripp, J. Perr-Sauer, L. Hayne, M. Lunacek, Jamil Gafur · AI4CE · 25 Jul 2022
  14. A general approximation lower bound in $L^p$ norm, with applications to feed-forward neural networks · El Mehdi Achour, Armand Foucault, Sébastien Gerchinovitz, François Malgouyres · 09 Jun 2022
  15. Qualitative neural network approximation over R and C: Elementary proofs for analytic and polynomial activation · Josiah Park, Stephan Wojtowytsch · 25 Mar 2022
  16. A Note on Machine Learning Approach for Computational Imaging · Bin Dong · 24 Feb 2022
  17. Designing Universal Causal Deep Learning Models: The Geometric (Hyper)Transformer · Beatrice Acciaio, Anastasis Kratsios, G. Pammer · OOD · 31 Jan 2022
  18. Deep Nonparametric Estimation of Operators between Infinite Dimensional Spaces · Hao Liu, Haizhao Yang, Minshuo Chen, T. Zhao, Wenjing Liao · 01 Jan 2022
  19. Deep Network Approximation in Terms of Intrinsic Parameters · Zuowei Shen, Haizhao Yang, Shijun Zhang · 15 Nov 2021
  20. Deep Network Approximation: Achieving Arbitrary Accuracy with Fixed Number of Neurons · Zuowei Shen, Haizhao Yang, Shijun Zhang · 06 Jul 2021
  21. Optimal Approximation Rate of ReLU Networks in terms of Width and Depth · Zuowei Shen, Haizhao Yang, Shijun Zhang · 28 Feb 2021
  22. Quantitative approximation results for complex-valued neural networks · A. Caragea, D. Lee, J. Maly, G. Pfander, F. Voigtlaender · 25 Feb 2021
  23. Size and Depth Separation in Approximating Benign Functions with Neural Networks · Gal Vardi, Daniel Reichman, T. Pitassi, Ohad Shamir · 30 Jan 2021
  24. Reproducing Activation Function for Deep Learning · Senwei Liang, Liyao Lyu, Chunmei Wang, Haizhao Yang · 13 Jan 2021
  25. The universal approximation theorem for complex-valued neural networks · F. Voigtlaender · 06 Dec 2020
  26. Neural Network Approximation: Three Hidden Layers Are Enough · Zuowei Shen, Haizhao Yang, Shijun Zhang · 25 Oct 2020
  27. Analysis of the rate of convergence of fully connected deep neural network regression estimates with smooth activation function · S. Langer · 12 Oct 2020
  28. Phase Transitions in Rate Distortion Theory and Deep Learning · Philipp Grohs, Andreas Klotz, F. Voigtlaender · 03 Aug 2020
  29. The Kolmogorov-Arnold representation theorem revisited · Johannes Schmidt-Hieber · 31 Jul 2020
  30. Expressivity of Deep Neural Networks · Ingo Gühring, Mones Raslan, Gitta Kutyniok · 09 Jul 2020
  31. Two-Layer Neural Networks for Partial Differential Equations: Optimization and Generalization Theory · Tao Luo, Haizhao Yang · 28 Jun 2020
  32. Approximation in shift-invariant spaces with deep ReLU neural networks · Yunfei Yang, Zhen Li, Yang Wang · 25 May 2020
  33. On Deep Instrumental Variables Estimate · Ruiqi Liu, Zuofeng Shang, Guang Cheng · 30 Apr 2020
  34. Deep Network Approximation for Smooth Functions · Jianfeng Lu, Zuowei Shen, Haizhao Yang, Shijun Zhang · 09 Jan 2020
  35. Benefits of depth in neural networks · Matus Telgarsky · 14 Feb 2016