Approximation Rates for Shallow ReLU$^k$ Neural Networks on Sobolev Spaces via the Radon Transform
Main: 10 pages
Bibliography: 3 pages
1 table
Abstract
Let $\Omega \subset \mathbb{R}^d$ be a bounded domain. We consider the problem of how efficiently shallow neural networks with the ReLU$^k$ activation function can approximate functions from Sobolev spaces $W^s(L_p(\Omega))$ with error measured in the $L_q(\Omega)$-norm. Utilizing the Radon transform and recent results from discrepancy theory, we provide a simple proof of nearly optimal approximation rates in a variety of cases, including when $q \leq p$, $p \geq 2$, and $s \leq k + (d+1)/2$. The rates we derive are optimal up to logarithmic factors, and significantly generalize existing results. An interesting consequence is that the adaptivity of shallow ReLU$^k$ neural networks enables them to obtain optimal approximation rates for smoothness up to order $s = k + (d+1)/2$, even though they represent piecewise polynomials of fixed degree $k$.
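To make the quantitative claim concrete, here is a hedged LaTeX sketch of the shape of the rate being asserted. The symbol $\Sigma_n^k$ for the set of width-$n$ shallow ReLU$^k$ networks and the unspecified logarithmic exponent $c$ are notation introduced here for illustration, not taken from the paper.

```latex
% Hedged sketch (not quoted from the paper) of the rate described above.
% The network class: shallow ReLU^k models of width n,
%   f_n(x) = \sum_{i=1}^n a_i \max(0, w_i \cdot x + b_i)^k .
\[
  \inf_{f_n \in \Sigma_n^k}
    \| f - f_n \|_{L_q(\Omega)}
  \;\lesssim\; n^{-s/d} (\log n)^{c}
  \qquad \text{for } \|f\|_{W^s(L_p(\Omega))} \le 1,
\]
% in the regime q <= p, p >= 2, s <= k + (d+1)/2 stated in the abstract;
% the exponent c > 0 of the logarithm is left unspecified here. The main
% term n^{-s/d} matches the known metric-entropy lower bound, which is the
% sense of "optimal up to logarithmic factors." Note that classical
% piecewise-polynomial approximation of fixed degree k saturates at
% smoothness s = k + 1, so reaching s = k + (d+1)/2 reflects the
% adaptivity of the neuron directions w_i.
```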
