We perform a study on kernel regression for large-dimensional data, where the sample size $n$ depends polynomially on the dimension $d$ of the samples, i.e., $n \asymp d^{\gamma}$ for some $\gamma > 0$. We first build a general tool to characterize the upper bound and the minimax lower bound of kernel regression for large-dimensional data through the Mendelson complexity and the metric entropy, respectively. When the target function falls into the RKHS associated with a (general) inner product model defined on $\mathbb{S}^{d}$, we utilize the new tool to show that the minimax rate of the excess risk of kernel regression is $n^{-1/2}$ when $n \asymp d^{\gamma}$ for $\gamma = 2, 4, 6, 8, \ldots$. We then further determine the optimal rate of the excess risk of kernel regression for all $\gamma > 0$ and find that the curve of the optimal rate varying along $\gamma$ exhibits several new phenomena, including multiple descent behavior and periodic plateau behavior. As an application, for the neural tangent kernel (NTK), we also provide a similar explicit description of the curve of the optimal rate. As a direct corollary, these claims hold for wide neural networks as well.
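For concreteness, the following is a minimal sketch of the standard kernel ridge regression setup that the abstract's quantities (excess risk, RKHS norm, large-dimensional regime) refer to; the estimator $\hat f_\lambda$, the regularization parameter $\lambda$, and the additive-noise model are illustrative assumptions, not quoted from the paper.

```latex
% Assumed setup: data (x_i, y_i), i = 1, ..., n, with x_i \in \mathbb{S}^d,
% y_i = f^*(x_i) + \epsilon_i, in the large-dimensional regime n \asymp d^\gamma, \gamma > 0.
\[
  \hat f_\lambda \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}}
    \ \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - f(x_i)\bigr)^2 + \lambda\,\|f\|_{\mathcal{H}}^2 ,
  \qquad
  \mathcal{E}(\hat f_\lambda) \;=\; \mathbb{E}_{x}\Bigl[\bigl(\hat f_\lambda(x) - f^*(x)\bigr)^2\Bigr],
\]
% where \mathcal{H} is the RKHS of the inner product kernel, and \mathcal{E} is the
% excess risk whose minimax rate over f^* \in \mathcal{H} is studied as a function of \gamma.
```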