Scaling description of generalization with number of parameters in deep learning
The advent of deep learning is a breakthrough in artificial intelligence, for which a theoretical understanding is lacking. Supervised deep learning involves the training of neural networks with a large number $N$ of parameters. For large enough $N$, in the so-called over-parametrized regime, one can essentially fit the training data points. Sparsity-based arguments would suggest that the generalization error increases as $N$ grows past a certain threshold $N^{*}$. Instead, empirical studies have shown that in the over-parametrized regime, generalization error keeps decreasing with $N$. We resolve this paradox through a new framework. We rely on the so-called Neural Tangent Kernel, which connects large neural nets to kernel methods, to show that the initialization causes finite-size random fluctuations $\|f_{N}-\bar{f}_{N}\|\sim N^{-1/4}$ of the neural net output function $f_{N}$ around its expectation $\bar{f}_{N}$. These fluctuations affect the generalization error $\epsilon_{N}$ for classification: under natural assumptions, it decays to a plateau value $\epsilon_{\infty}$ in a power-law fashion $\sim N^{-1/2}$. This description breaks down at a so-called jamming transition $N=N^{*}$. At this threshold, we argue that $\|f_{N}\|$ diverges. This result leads to a plausible explanation for the cusp in test error known to occur at $N^{*}$. Our results are confirmed by extensive empirical observations on the MNIST and CIFAR image datasets. Our analysis finally suggests that, given a computational envelope, it is best to use several nets of intermediate sizes, just beyond $N^{*}$, and to average their outputs.
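As an illustration of the last two points (initialization-induced fluctuations of $f_{N}$ and averaging the outputs of several nets), here is a minimal, self-contained PyTorch sketch. It is not the authors' code: the toy 2D dataset, the one-hidden-layer architecture, and all hyper-parameters (width, learning rate, number of seeds) are arbitrary choices made for the example. Full-batch gradient descent is used so that, across seeds, the only source of randomness is the initialization.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary-classification data: two Gaussian blobs in 2D (a stand-in for MNIST/CIFAR).
def make_data(n):
    y = torch.randint(0, 2, (n,))
    x = torch.randn(n, 2) + 2.0 * (2.0 * y.float() - 1.0).unsqueeze(1)
    return x, y

x_train, y_train = make_data(256)
x_test, y_test = make_data(1024)

def make_net(width):
    # One-hidden-layer ReLU net; its parameter count N grows with `width`.
    return nn.Sequential(nn.Linear(2, width), nn.ReLU(), nn.Linear(width, 1))

def train(net, epochs=500, lr=0.05):
    # Full-batch gradient descent: deterministic given the initialization.
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(x_train).squeeze(1), y_train.float())
        loss.backward()
        opt.step()
    return net

def test_error(logits):
    pred = (logits.squeeze(1) > 0).long()
    return (pred != y_test).float().mean().item()

# Train several nets of the same width that differ only in their random initialization.
width, n_seeds = 64, 8
outputs = []
for seed in range(n_seeds):
    torch.manual_seed(seed)           # the seed fixes the initialization of f_N
    net = train(make_net(width))
    with torch.no_grad():
        outputs.append(net(x_test))
outputs = torch.stack(outputs)        # shape (n_seeds, n_test, 1)

# (1) Size of the initialization-induced fluctuations of f_N around its seed average.
mean_output = outputs.mean(dim=0)
rms_fluct = (outputs - mean_output).pow(2).mean().sqrt().item()
print(f"rms fluctuation of f_N around its seed average: {rms_fluct:.3f}")

# (2) Averaging outputs over the ensemble removes those fluctuations and typically
#     lowers the classification error, which is the recommendation in the abstract.
single_errors = [test_error(outputs[i]) for i in range(n_seeds)]
print(f"mean single-net test error : {sum(single_errors) / n_seeds:.3f}")
print(f"ensemble-averaged test err : {test_error(mean_output):.3f}")

Comparing the mean single-net error with the error of the output-averaged ensemble gives a rough sense of how much of the test error is due to initialization fluctuations at this width; the exponents quoted in the abstract would require repeating such measurements over a range of widths.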