Some Super-approximation Rates of ReLU Neural Networks for Korobov Functions
Yuwen Li
Guozhi Zhang
Main: 14 pages · 2 figures · Bibliography: 2 pages
Abstract
This paper examines the $L^2$ and $H^1$ norm approximation errors of ReLU neural networks for Korobov functions. In terms of network width and depth, we derive nearly optimal super-approximation error bounds in both norms for target functions with bounded mixed derivatives in each direction. The analysis leverages sparse grid finite elements and the bit extraction technique. Our results improve upon classical lowest order $L^2$ and $H^1$ norm error bounds and demonstrate that the expressivity of neural networks is largely unaffected by the curse of dimensionality.
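The curse-of-dimensionality claim rests on the sparse grid construction: keeping only hierarchical levels $l$ with $|l|_1 \le n + d - 1$ reduces the degrees of freedom from $\mathcal{O}(2^{nd})$ for a full tensor grid to $\mathcal{O}(2^n n^{d-1})$. A minimal counting sketch (illustrative only; the function names are ours, and we use the classical level convention with $2^{l-1}$ interior points per 1D level):

```python
from itertools import product

def full_grid_size(n, d):
    # Full tensor-product grid: 2^n - 1 interior points in each of d directions.
    return (2**n - 1) ** d

def sparse_grid_size(n, d):
    # Classical sparse grid: keep multi-levels l = (l_1, ..., l_d) with
    # |l|_1 <= n + d - 1; each 1D level l_i contributes 2^(l_i - 1)
    # hierarchical interior points, so a multi-level contributes 2^(|l|_1 - d).
    total = 0
    for levels in product(range(1, n + 1), repeat=d):
        if sum(levels) <= n + d - 1:
            total += 2 ** (sum(levels) - d)
    return total

# In 1D the sparse and full grids coincide; in higher dimensions the
# sparse grid is exponentially smaller (e.g. 5 vs 9 points for n = 2, d = 2).
```

In 2D at level $n = 2$, for example, the sparse grid keeps the multi-levels $(1,1)$, $(1,2)$, $(2,1)$ for 5 points, versus $3^2 = 9$ points on the full grid; the gap widens rapidly as $n$ and $d$ grow.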
