This work studies the sampling complexity of learning with ReLU neural networks and neural operators. For mappings belonging to relevant approximation spaces, we derive upper bounds on the best-possible convergence rate of any learning algorithm, with respect to the number of samples. In the finite-dimensional case, these bounds imply a gap between the parametric and sampling complexities of learning, known as the \emph{theory-to-practice gap}. In this work, a unified treatment of the theory-to-practice gap is achieved in a general $L^p$-setting, while at the same time improving the available bounds in the literature. Furthermore, based on these results, the theory-to-practice gap is extended to the infinite-dimensional setting of operator learning. Our results apply to Deep Operator Networks and integral kernel-based neural operators, including the Fourier neural operator. We show that the best-possible convergence rate in a Bochner $L^p$-norm is bounded by Monte-Carlo rates of order $1/p$.
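To make the final claim concrete, a minimal illustrative formulation (the notation $\mathcal{A}_m$, $\mathcal{U}$, and $\mu$ is ours, not taken verbatim from the paper): for any learning algorithm $\mathcal{A}_m$ with access to $m$ samples of the target map, the worst-case reconstruction error over the relevant approximation class $\mathcal{U}$, measured in the Bochner $L^p$-norm, cannot decay faster than the Monte-Carlo rate $m^{-1/p}$,
\[
  \inf_{\mathcal{A}_m} \; \sup_{\mathcal{G} \in \mathcal{U}} \; \bigl\| \mathcal{G} - \mathcal{A}_m(\mathcal{G}) \bigr\|_{L^p(\mu)} \;\gtrsim\; m^{-1/p},
\]
where the infimum runs over all (possibly adaptive) reconstruction maps based on $m$ samples; the precise definitions of the class $\mathcal{U}$, the measure $\mu$, and the admissible algorithms are as given in the paper.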
@article{grohs2025_2503.18219,
  title   = {Theory-to-Practice Gap for Neural Networks and Neural Operators},
  author  = {Philipp Grohs and Samuel Lanthaler and Margaret Trautner},
  journal = {arXiv preprint arXiv:2503.18219},
  year    = {2025}
}