Statistical Optimality of Deep Wide Neural Networks

Journal of Machine Learning Research (JMLR), 2023
Abstract

In this paper, we consider the generalization ability of deep wide feedforward ReLU neural networks defined on a bounded domain $\mathcal{X} \subset \mathbb{R}^{d}$. We first demonstrate that the generalization ability of such a network is fully characterized by that of the corresponding deep neural tangent kernel (NTK) regression. We then investigate the spectral properties of the deep NTK and show that it is positive definite on $\mathcal{X}$ and that its eigenvalue decay rate is $(d+1)/d$ (that is, the $j$-th eigenvalue decays as $j^{-(d+1)/d}$). Thanks to well-established theories in kernel regression, we conclude that multilayer wide neural networks trained by gradient descent with proper early stopping achieve the minimax rate, provided that the regression function lies in the reproducing kernel Hilbert space (RKHS) associated with the corresponding NTK. Finally, we illustrate that overfitted multilayer wide neural networks cannot generalize well on $\mathbb{S}^{d}$.
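To make the pipeline the abstract describes concrete, here is a minimal sketch (not from the paper) of deep-NTK regression with early stopping: it computes the multilayer NTK of a fully connected ReLU network via the standard arc-cosine recursion and then fits the kernel predictor by gradient descent stopped after a fixed number of steps. The function names (`relu_ntk`, `early_stopped_ntk_regression`) and all hyperparameters (depth, step count) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def relu_ntk(X1, X2, depth=3):
    """NTK of a fully connected ReLU network with `depth` hidden layers,
    via the standard arc-cosine recursion (variance-preserving scaling,
    so the diagonal Sigma^{(h)}(x, x) is constant across layers)."""
    S = X1 @ X2.T                        # Sigma^{(0)}(x, x') = <x, x'>
    s1 = np.sum(X1 ** 2, axis=1)         # Sigma^{(0)}(x, x)
    s2 = np.sum(X2 ** 2, axis=1)
    norm = np.sqrt(np.outer(s1, s2))     # unchanged layer to layer
    ntk = S.copy()                       # Theta^{(0)} = Sigma^{(0)}
    for _ in range(depth):
        lam = np.clip(S / norm, -1.0, 1.0)
        theta = np.arccos(lam)
        # Sigma^{(h)} and its derivative kernel Sigma_dot^{(h)} for ReLU.
        S = norm * (np.sin(theta) + (np.pi - theta) * lam) / np.pi
        S_dot = (np.pi - theta) / np.pi
        # Theta^{(h)} = Theta^{(h-1)} * Sigma_dot^{(h)} + Sigma^{(h)}
        ntk = ntk * S_dot + S
    return ntk

def early_stopped_ntk_regression(K, y, n_steps=100):
    """Gradient descent on the kernel least-squares loss, stopped after
    n_steps; the early stopping acts as the regularizer."""
    n = len(y)
    lr = n / np.linalg.eigvalsh(K).max()     # step size within the stable range
    alpha = np.zeros(n)                      # predictor is f = K @ alpha
    for _ in range(n_steps):
        alpha -= lr * (K @ alpha - y) / n    # discretized RKHS gradient flow
    return alpha
```

A usage example on synthetic data, with inputs normalized to the sphere (so the Gram matrix is well conditioned and matches the $\mathbb{S}^{d}$ setting in the last sentence of the abstract):

```python
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # training inputs on S^2
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=200)

K = relu_ntk(X, X, depth=3)
alpha = early_stopped_ntk_regression(K, y, n_steps=100)

X_test = rng.normal(size=(50, 3))
X_test /= np.linalg.norm(X_test, axis=1, keepdims=True)
preds = relu_ntk(X_test, X, depth=3) @ alpha
```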
