Some Super-approximation Rates of ReLU Neural Networks for Korobov Functions

Yuwen Li
Guozhi Zhang
14 pages (main text), 2 figures, 2-page bibliography
Abstract

This paper examines the $L_p$ and $W^1_p$ norm approximation errors of ReLU neural networks for Korobov functions. In terms of network width and depth, we derive nearly optimal super-approximation error bounds of order $2m$ in the $L_p$ norm and order $2m-2$ in the $W^1_p$ norm, for target functions with $L_p$ mixed derivatives of order $m$ in each direction. The analysis leverages sparse grid finite elements and the bit extraction technique. Our results improve upon classical lowest-order $L_\infty$ and $H^1$ norm error bounds and demonstrate that the expressivity of neural networks is largely unaffected by the curse of dimensionality.
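As a reading aid, here is a schematic sketch (not quoted from the paper) of the function class and of the form such rates usually take; the space $X^{m,p}$, the network class $\mathcal{N}(W,L)$, the constant $C$, and the logarithmic power $\mu$ below are illustrative notation, not necessarily the paper's. A Korobov-type space of order $m$ on $\Omega = (0,1)^d$ is conventionally defined through $L_p$-bounded mixed derivatives,

$$X^{m,p}(\Omega) = \bigl\{ f \in L_p(\Omega) : D^{\boldsymbol{\alpha}} f \in L_p(\Omega) \ \text{for all } \boldsymbol{\alpha} \in \mathbb{N}_0^d \text{ with } \|\boldsymbol{\alpha}\|_\infty \le m \bigr\},$$

and a bound "of order $2m$ in terms of network width $W$ and depth $L$" then has the schematic form

$$\inf_{\phi \in \mathcal{N}(W,L)} \|f - \phi\|_{L_p(\Omega)} \le C(m,p,d)\,(WL)^{-2m}\bigl(\log(WL)\bigr)^{\mu},$$

with the $W^1_p$ bound obtained by replacing $2m$ with $2m-2$; the exact logarithmic factors and the dependence of $C$ on the dimension $d$ are as specified in the paper.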
