Statistical Mechanics of Generalization in Kernel Regression

Nature Communications (Nat Commun), 2020
Abstract

Generalization beyond a training dataset is a main goal of machine learning. We investigate generalization error in kernel regression using statistical mechanics, deriving an analytical expression applicable to any kernel. We discuss applications to a kernel with a finite number of spectral modes. Then, focusing on the broad class of rotation-invariant kernels, which is relevant to training deep neural networks in the infinite-width limit, we show several phenomena. When data is drawn from a spherically symmetric distribution and the number of input dimensions, $D$, is large, we find that multiple learning stages exist, one for each scaling of the number of training samples as $\mathcal{O}_D(D^K)$ with $K \in \mathbb{Z}^+$. The behavior of the learning curve in each stage is governed by an \textit{effective} noise and regularizer determined by the tails of the kernel and target function spectra. When the effective regularization is zero, we identify a first-order phase transition corresponding to a divergence in the generalization error. Each learning stage can exhibit sample-wise \textit{double descent}, where learning curves show non-monotonic dependence on the sample size. For each stage, an optimal value of the effective regularizer, equal to the effective noise variance, minimizes the generalization error.
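
The setting described above can be illustrated numerically. Below is a minimal sketch (not the paper's code or its analytical theory): it empirically estimates the generalization error of kernel ridge regression as a function of the number of training samples $P$, with inputs drawn uniformly from the sphere and a rotation-invariant (RBF) kernel. The target function, kernel width, and ridge value are illustrative assumptions, not taken from the paper.

```python
# Sketch: empirical learning curve for kernel ridge regression on the sphere.
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(n, D):
    """Draw n points uniformly from the unit sphere in R^D."""
    x = rng.standard_normal((n, D))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def rbf_kernel(X, Z, width=1.0):
    """Rotation-invariant Gaussian kernel K(x, z) = exp(-|x - z|^2 / (2 width^2))."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T
    return np.exp(-sq / (2 * width**2))

def target(X):
    """Illustrative (assumed) target: low-degree polynomial on the sphere."""
    return X[:, 0] + 0.5 * X[:, 0] * X[:, 1]

def generalization_error(P, D=10, ridge=1e-3, n_test=2000, n_trials=20):
    """Average test MSE of kernel ridge regression trained on P samples."""
    errs = []
    for _ in range(n_trials):
        Xtr, Xte = sample_sphere(P, D), sample_sphere(n_test, D)
        ytr, yte = target(Xtr), target(Xte)
        K = rbf_kernel(Xtr, Xtr)
        # Ridge-regression weights: alpha = (K + lambda I)^{-1} y
        alpha = np.linalg.solve(K + ridge * np.eye(P), ytr)
        pred = rbf_kernel(Xte, Xtr) @ alpha
        errs.append(np.mean((pred - yte) ** 2))
    return np.mean(errs)

# Learning curve: generalization error vs. number of training samples.
for P in [10, 30, 100, 300, 1000]:
    print(P, generalization_error(P))
```

Sweeping $P$ across scales comparable to $D$ and $D^2$ in such an experiment is one way to probe the stage-wise behavior the abstract refers to; the specific choices above are only assumptions for the sake of a runnable example.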
