Theory-to-Practice Gap for Neural Networks and Neural Operators

23 March 2025
Philipp Grohs
Samuel Lanthaler
Margaret Trautner
Abstract

This work studies the sampling complexity of learning with ReLU neural networks and neural operators. For mappings belonging to relevant approximation spaces, we derive upper bounds on the best-possible convergence rate of any learning algorithm, with respect to the number of samples. In the finite-dimensional case, these bounds imply a gap between the parametric and sampling complexities of learning, known as the \emph{theory-to-practice gap}. We achieve a unified treatment of the theory-to-practice gap in a general $L^p$-setting, while at the same time improving the bounds available in the literature. Furthermore, based on these results, the theory-to-practice gap is extended to the infinite-dimensional setting of operator learning. Our results apply to Deep Operator Networks and integral kernel-based neural operators, including the Fourier neural operator. We show that the best-possible convergence rate in a Bochner $L^p$-norm is bounded by Monte-Carlo rates of order $1/p$.
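
To make the last claim concrete, it can be rendered schematically as follows; the symbols $\mathcal{A}$ (learning algorithm), $\mathcal{F}$ (approximation class), and $n$ (number of samples) are illustrative notation introduced here, not taken verbatim from the paper. If an algorithm $\mathcal{A}$ reconstructs a target map $F$ from $n$ point samples, then the worst-case error over $\mathcal{F}$, measured in a (Bochner) $L^p$-norm, satisfies roughly

  $$\inf_{\mathcal{A}} \sup_{F \in \mathcal{F}} \| F - \mathcal{A}(F) \|_{L^p} \gtrsim n^{-1/p},$$

up to constants. In words: from samples alone, no learning algorithm can converge faster than the Monte-Carlo rate $n^{-1/p}$, even though approximation theory guarantees that networks with comparatively few parameters can represent these maps to much higher accuracy; this mismatch is the theory-to-practice gap.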

@article{grohs2025_2503.18219,
  title={Theory-to-Practice Gap for Neural Networks and Neural Operators},
  author={Philipp Grohs and Samuel Lanthaler and Margaret Trautner},
  journal={arXiv preprint arXiv:2503.18219},
  year={2025}
}