
Scaling description of generalization with number of parameters in deep learning

Abstract

Supervised deep learning involves the training of neural networks with a large number $N$ of parameters. For large enough $N$, in the so-called over-parametrized regime, one can essentially fit the training data points. Sparsity-based arguments would suggest that the generalization error increases as $N$ grows past a certain threshold $N^{*}$. Instead, empirical studies have shown that in the over-parametrized regime, generalization error keeps decreasing with $N$. We resolve this paradox through a new framework. We rely on the so-called Neural Tangent Kernel, which connects large neural nets to kernel methods, to show that the initialization causes finite-size random fluctuations $\|f_{N}-\bar{f}_{N}\|\sim N^{-1/4}$ of the neural net output function $f_{N}$ around its expectation $\bar{f}_{N}$. These affect the generalization error $\epsilon_{N}$ for classification: under natural assumptions, it decays to a plateau value $\epsilon_{\infty}$ in a power-law fashion $\sim N^{-1/2}$. This description breaks down at a so-called jamming transition $N=N^{*}$. At this threshold, we argue that $\|f_{N}\|$ diverges. This result leads to a plausible explanation for the cusp in test error known to occur at $N^{*}$. Our results are confirmed by extensive empirical observations on the MNIST and CIFAR image datasets. Our analysis finally suggests that, given a computational envelope, the smallest generalization error is obtained using several networks of intermediate sizes, just beyond $N^{*}$, and averaging their outputs.
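
To illustrate the ensemble-averaging suggestion in the final sentence, the minimal sketch below trains several independently initialized networks of a fixed intermediate width and averages their output functions before classifying. This is not the authors' code: the synthetic dataset, scikit-learn MLPClassifier, widths, and ensemble size are placeholder assumptions, and the width "just beyond $N^{*}$" depends on the task.

# Minimal sketch (illustrative assumptions, not the paper's experiments):
# average the output functions f_N of several independently initialized
# networks of intermediate width instead of training one very wide network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder synthetic classification task (the paper uses MNIST and CIFAR).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def ensemble_accuracy(width, n_members):
    # Train n_members networks of the given hidden width, each with a
    # different random initialization, average their predicted class
    # probabilities (a proxy for averaging f_N), and score the result.
    probs = np.zeros((len(X_te), 2))
    for seed in range(n_members):
        net = MLPClassifier(hidden_layer_sizes=(width,), max_iter=500,
                            random_state=seed)
        net.fit(X_tr, y_tr)
        probs += net.predict_proba(X_te)
    y_pred = (probs / n_members).argmax(axis=1)
    return (y_pred == y_te).mean()

# One wide network versus an average of several intermediate-size ones;
# the widths here are arbitrary and only illustrate the comparison.
print("single wide net:   ", ensemble_accuracy(width=512, n_members=1))
print("ensemble of 8 nets:", ensemble_accuracy(width=64, n_members=8))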
