
Sharp Lower Bounds for Linearized ReLU^k Approximation on the Sphere

Main: 12 pages · Bibliography: 3 pages
Abstract

We prove a saturation theorem for linearized shallow $\mathrm{ReLU}^k$ neural networks on the unit sphere $\mathbb{S}^d$. For any antipodally quasi-uniform set of centers, if the target function has smoothness $r > \tfrac{d+2k+1}{2}$, then the best $\mathcal{L}^2(\mathbb{S}^d)$ approximation cannot converge faster than order $n^{-\frac{d+2k+1}{2d}}$. This lower bound matches existing upper bounds, thereby establishing the exact saturation order $\tfrac{d+2k+1}{2d}$ for such networks. Our results place linearized neural-network approximation firmly within the classical saturation framework and show that, although $\mathrm{ReLU}^k$ networks outperform finite elements under equal degrees $k$, this advantage is intrinsically limited.
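In symbols, the abstract's claim can be restated schematically as follows; the notation $\Sigma_n^k$ for the linearized shallow $\mathrm{ReLU}^k$ class built on $n$ fixed centers is introduced here for illustration and is not taken from the paper:

\[
\inf_{g \in \Sigma_n^k} \; \| f - g \|_{\mathcal{L}^2(\mathbb{S}^d)} \;\gtrsim\; n^{-\frac{d+2k+1}{2d}}
\qquad \text{whenever the target } f \text{ has smoothness } r > \tfrac{d+2k+1}{2},
\]

so that, combined with the matching upper bound, the rate $n^{-\frac{d+2k+1}{2d}}$ cannot be improved no matter how smooth $f$ is beyond the threshold, which is precisely the saturation phenomenon at order $\tfrac{d+2k+1}{2d}$.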
