Sharp Lower Bounds for Linearized ReLU^k Approximation on the Sphere

Main: 12 pages
Bibliography: 3 pages
Abstract
We prove a saturation theorem for linearized shallow ReLU^k neural networks on the unit sphere. For any antipodally quasi-uniform set of centers, once the target function has smoothness beyond a certain threshold, the best approximation cannot converge faster than a fixed saturation order. This lower bound matches existing upper bounds, thereby establishing the exact saturation order for such networks. Our results place linearized neural-network approximation firmly within the classical saturation framework and show that, although ReLU^k networks outperform finite elements of the same degree, this advantage is intrinsically limited.
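To make the saturation statement concrete, a schematic form of such a lower bound is sketched below. The number of centers n, the smoothness index s, the threshold s_0, the saturation exponent alpha, and the choice of norm are illustrative assumptions only; the specific quantities were not preserved in this excerpt.

\inf_{g \in \Sigma_n} \| f - g \|_{L^2(\mathbb{S}^{d-1})} \;\gtrsim\; n^{-\alpha},
\qquad f \in C^s(\mathbb{S}^{d-1}), \ s > s_0,

where \Sigma_n denotes the linearized ReLU^k networks built on a fixed antipodally quasi-uniform set of n centers. The point of a saturation theorem is that the exponent \alpha does not improve as s increases beyond s_0, in contrast to the pre-saturation regime where higher smoothness yields faster rates.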
