Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks

International Conference on Learning Representations (ICLR), 2019
Abstract

Recent work has revealed that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error. The required width, however, is always polynomial in at least one of the sample size $n$, the (inverse) training error $1/\epsilon$, and the (inverse) failure probability $1/\delta$. This work shows that $\widetilde{O}(1/\epsilon)$ iterations of gradient descent on two-layer networks of any width exceeding $\mathrm{polylog}(n, 1/\epsilon, 1/\delta)$, together with $\widetilde{\Omega}(1/\epsilon^2)$ training examples, suffice to achieve a test error of $\epsilon$. The analysis further relies upon a margin property of the limiting kernel, which is guaranteed to be positive and can distinguish between true labels and random labels.
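The setting described above, a two-layer ReLU network trained by full-batch gradient descent, can be illustrated with a minimal sketch. This is not the paper's exact construction: the data, labels, width, step size, and the choice to train only the inner-layer weights (with random fixed outer weights) are illustrative assumptions.

```python
import numpy as np

# Two-layer ReLU network f(x) = sum_j a_j * relu(w_j . x),
# trained by full-batch gradient descent on the logistic loss.
# Only the inner weights W are updated; outer weights a are fixed at random.
rng = np.random.default_rng(0)
n, d, m = 200, 5, 64                              # samples, input dim, width (illustrative)
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # unit-norm inputs
y = np.sign(X[:, 0] + 1e-12)                      # toy labels in {-1, +1}

W = rng.standard_normal((m, d)) / np.sqrt(d)      # random initialization
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)  # fixed outer layer

def predict(W):
    return np.maximum(X @ W.T, 0.0) @ a           # network outputs, shape (n,)

def logistic_loss(W):
    return np.mean(np.log1p(np.exp(-y * predict(W))))

lr = 1.0
for _ in range(500):
    H = X @ W.T                                   # pre-activations, shape (n, m)
    p = -y / (1.0 + np.exp(y * predict(W)))       # per-example loss derivative
    grad = ((p[:, None] * (H > 0)) * a).T @ X / n # gradient w.r.t. W, shape (m, d)
    W -= lr * grad

print(round(logistic_loss(W), 4))                 # training loss after descent
```

On this linearly separable toy problem the training loss drops well below its initial value of roughly $\log 2$, consistent with the near-initialization (NTK-style) regime the paper analyzes.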
