
What Can ResNet Learn Efficiently, Going Beyond Kernels?

Neural Information Processing Systems (NeurIPS), 2019
Abstract

How can neural networks such as ResNet efficiently learn CIFAR-10 with test accuracy above 96%, while other methods, especially kernel methods, fall far behind? Can we provide theoretical justification for this gap? There is an influential line of work relating neural networks to kernels in the over-parameterized regime, proving that they can learn certain concept classes that are also learnable by kernels, with similar test error. Yet, can we show that neural networks provably learn some concept class better than kernels? We answer this positively in the PAC learning language. We prove that neural networks can efficiently learn a notable class of functions, including those defined by three-layer residual networks with smooth activations, without any distributional assumption. At the same time, we prove there are simple functions in this class for which the test error obtained by neural networks can be much smaller than that of any "generic" kernel method, including neural tangent kernels, conjugate kernels, etc. The main intuition is that multi-layer neural networks can implicitly perform hierarchical learning using different layers, which reduces the sample complexity compared to "one-shot" learning algorithms such as kernel methods.
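To make the abstract's "three-layer residual networks with smooth activations" concrete, here is a minimal PyTorch sketch of what such an architecture might look like. This is only an illustration under assumed design choices (tanh as the smooth activation, a single skip connection, the class name and dimensions are made up); the exact concept class and parameterization studied in the paper may differ.

```python
import torch
import torch.nn as nn


class ThreeLayerResNet(nn.Module):
    """Illustrative three-layer residual network with a smooth activation.

    A sketch of the kind of architecture the abstract refers to, not the
    paper's exact construction.
    """

    def __init__(self, d_in: int, width: int, d_out: int = 1):
        super().__init__()
        self.embed = nn.Linear(d_in, width)    # first layer
        self.hidden = nn.Linear(width, width)  # second layer, used residually
        self.head = nn.Linear(width, d_out)    # output layer
        self.act = nn.Tanh()                   # smooth (infinitely differentiable) activation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.act(self.embed(x))
        h = h + self.act(self.hidden(h))       # residual (skip) connection
        return self.head(h)


if __name__ == "__main__":
    # Quick smoke test: run a random batch through the module.
    model = ThreeLayerResNet(d_in=10, width=64)
    x = torch.randn(32, 10)
    print(model(x).shape)  # torch.Size([32, 1])
```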
