ResearchTrend.AI
© 2026 ResearchTrend.AI, All rights reserved.
The phase diagram of approximation rates for deep neural networks
arXiv: 1906.09477 (v2, latest)
Neural Information Processing Systems (NeurIPS), 2019
22 June 2019
Dmitry Yarotsky
Anton Zhevnerchuk

Papers citing "The phase diagram of approximation rates for deep neural networks"

50 / 87 papers shown
Learning from one graph: transductive learning guarantees via the geometry of small random worlds
Nils Detering
Luca Galimberti
Anastasis Kratsios
Giulia Livieri
A. M. Neuman
08 Sep 2025
Beyond Universal Approximation Theorems: Algorithmic Uniform Approximation by Neural Networks Trained with Noisy Data
Anastasis Kratsios
Tin Sum Cheng
Daniel Roy
31 Aug 2025
Deep Neural Networks with General Activations: Super-Convergence in Sobolev Norms
Yahong Yang
Juncai He
07 Aug 2025
Transformers Can Overcome the Curse of Dimensionality: A Theoretical Study from an Approximation Perspective
Yuling Jiao
Yanming Lai
Yang Wang
Bokai Yan
18 Apr 2025
Statistically guided deep learning
Michael Kohler
A. Krzyżak
11 Apr 2025
Approximation properties of neural ODEs
Arturo De Marinis
Davide Murari
E. Celledoni
Nicola Guglielmi
B. Owren
Francesco Tudisco
19 Mar 2025
Fourier Multi-Component and Multi-Layer Neural Networks: Unlocking High-Frequency Potential
Shijun Zhang
Hongkai Zhao
Yimin Zhong
Haomin Zhou
26 Feb 2025
Curse of Dimensionality in Neural Network Optimization
Sanghoon Na
Haizhao Yang
07 Feb 2025
On the expressiveness and spectral bias of KANs
International Conference on Learning Representations (ICLR), 2024
Yixuan Wang
Jonathan W. Siegel
Ziming Liu
Thomas Y. Hou
02 Oct 2024
Approximation Bounds for Recurrent Neural Networks with Application to Regression
Yuling Jiao
Yang Wang
Bokai Yan
09 Sep 2024
On the optimal approximation of Sobolev and Besov functions using deep ReLU neural networks
Applied and Computational Harmonic Analysis (ACHA), 2024
Yunfei Yang
02 Sep 2024
Deep Limit Model-free Prediction in Regression
Kejin Wu
D. Politis
18 Aug 2024
On the estimation rate of Bayesian PINN for inverse problems
Yi Sun
Debarghya Mukherjee
Yves Atchadé
21 Jun 2024
Deep Ridgelet Transform and Unified Universality Theorem for Deep and Shallow Joint-Group-Equivariant Machines
Sho Sonoda
Yuka Hashimoto
Isao Ishikawa
Masahiro Ikeda
22 May 2024
Approximation and Gradient Descent Training with Neural Networks
G. Welper
19 May 2024
Error Analysis of Three-Layer Neural Network Trained with PGD for Deep Ritz Method
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2024
Yuling Jiao
Yanming Lai
Yang Wang
19 May 2024
Scalable Subsampling Inference for Deep Neural Networks
ACM / IMS Journal of Data Science (JIDS), 2024
Kejin Wu
D. Politis
14 May 2024
Mixture of Experts Softens the Curse of Dimensionality in Operator Learning
Anastasis Kratsios
Takashi Furuya
Jose Antonio Lara Benitez
Matti Lassas
Maarten V. de Hoop
13 Apr 2024
Learning WENO for entropy stable schemes to solve conservation laws
Philip Charles
Deep Ray
21 Mar 2024
Operator Learning: Algorithms and Analysis
Nikola B. Kovachki
S. Lanthaler
Andrew M. Stuart
24 Feb 2024
Approximation Rates and VC-Dimension Bounds for (P)ReLU MLP Mixture of Experts
Anastasis Kratsios
Haitz Sáez de Ocáriz Borde
Takashi Furuya
Marc T. Law
05 Feb 2024
Deeper or Wider: A Perspective from Optimal Generalization Error with Sobolev Loss
International Conference on Machine Learning (ICML), 2024
Yahong Yang
Juncai He
31 Jan 2024
Semi-Supervised Deep Sobolev Regression: Estimation and Variable Selection by ReQU Neural Network
Zhao Ding
Chenguang Duan
Yuling Jiao
Jerry Zhijian Yang
09 Jan 2024
Universal Consistency of Wide and Deep ReLU Neural Networks and Minimax Optimal Convergence Rates for Kolmogorov-Donoho Optimal Function Classes
International Conference on Machine Learning (ICML), 2024
Hyunouk Ko
Xiaoming Huo
08 Jan 2024
Approximating Langevin Monte Carlo with ResNet-like Neural Network architectures
Charles Miranda
Janina Enrica Schutte
David Sommer
Martin Eigel
06 Nov 2023
An Operator Learning Framework for Spatiotemporal Super-resolution of Scientific Simulations
Valentin Duruisseaux
Amit Chakraborty
04 Nov 2023
Transformers Can Solve Non-Linear and Non-Markovian Filtering Problems in Continuous Time For Conditionally Gaussian Signals
Blanka Hovart
Anastasis Kratsios
Yannick Limmer
Xuwei Yang
30 Oct 2023
Deep ReLU networks and high-order finite element methods II: Chebyshev emulation
Computers and Mathematics with Applications (CMA), 2023
J. Opschoor
Christoph Schwab
11 Oct 2023
Deep Ridgelet Transform: Voice with Koopman Operator Proves Universality of Formal Deep Networks
Sho Sonoda
Yuka Hashimoto
Isao Ishikawa
Masahiro Ikeda
05 Oct 2023
Approximation Results for Gradient Descent trained Neural Networks
G. Welper
09 Sep 2023
Distribution learning via neural differential equations: a nonparametric statistical perspective
Journal of Machine Learning Research (JMLR), 2023
Youssef Marzouk
Zhi Ren
Sven Wang
Jakob Zech
03 Sep 2023
Optimal Approximation and Learning Rates for Deep Convolutional Neural Networks
Shao-Bo Lin
07 Aug 2023
Deep Operator Network Approximation Rates for Lipschitz Operators
Analysis and Applications (AA), 2023
Ch. Schwab
A. Stein
Jakob Zech
19 Jul 2023
On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network
International Conference on Machine Learning (ICML), 2023
Shijun Zhang
Jianfeng Lu
Hongkai Zhao
29 Jan 2023
Deep Learning and Computational Physics (Lecture Notes)
Deep Ray
Orazio Pinti
Assad A. Oberai
03 Jan 2023
Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev and Besov Spaces
Journal of Machine Learning Research (JMLR), 2022
Jonathan W. Siegel
25 Nov 2022
Instance-Dependent Generalization Bounds via Optimal Transport
Journal of Machine Learning Research (JMLR), 2022
Songyan Hou
Parnian Kassraie
Anastasis Kratsios
Andreas Krause
Jonas Rothfuss
02 Nov 2022
Analysis of the rate of convergence of an over-parametrized deep neural network estimate learned by gradient descent
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2022
Michael Kohler
A. Krzyżak
04 Oct 2022
Achieve the Minimum Width of Neural Networks for Universal Approximation
International Conference on Learning Representations (ICLR), 2022
Yongqiang Cai
23 Sep 2022
Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$
R. Gentile
G. Welper
17 Sep 2022
On the universal consistency of an over-parametrized deep neural network estimate learned by gradient descent
Annals of the Institute of Statistical Mathematics (AISM), 2022
Selina Drews
Michael Kohler
30 Aug 2022
The BUTTER Zone: An Empirical Study of Training Dynamics in Fully Connected Neural Networks
Charles Edison Tripp
J. Perr-Sauer
L. Hayne
M. Lunacek
Jamil Gafur
25 Jul 2022
Consistency of Neural Networks with Regularization
Xiaoxi Shen
Jinghang Lin
22 Jun 2022
Finite Expression Method for Solving High-Dimensional Partial Differential Equations
Senwei Liang
Haizhao Yang
21 Jun 2022
Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations
Neural Networks (NN), 2022
Denis Belomestny
A. Naumov
Nikita Puchkin
S. Samsonov
20 Jun 2022
A general approximation lower bound in $L^p$ norm, with applications to feed-forward neural networks
Neural Information Processing Systems (NeurIPS), 2022
El Mehdi Achour
Armand Foucault
Sébastien Gerchinovitz
François Malgouyres
09 Jun 2022
Neural Network Architecture Beyond Width and Depth
Neural Information Processing Systems (NeurIPS), 2022
Zuowei Shen
Haizhao Yang
Shijun Zhang
19 May 2022
Do ReLU Networks Have An Edge When Approximating Compactly-Supported Functions?
Anastasis Kratsios
Behnoosh Zamanlooy
24 Apr 2022
Qualitative neural network approximation over R and C: Elementary proofs for analytic and polynomial activation
Josiah Park
Stephan Wojtowytsch
25 Mar 2022
A Note on Machine Learning Approach for Computational Imaging
Bin Dong
24 Feb 2022