Analysis of a Two-Layer Neural Network via Displacement Convexity

Abstract

Fitting a function by using linear combinations of a large number $N$ of `simple' components is one of the most fruitful ideas in statistical learning. This idea lies at the core of a variety of methods, from two-layer neural networks to kernel regression, to boosting. In general, the resulting risk minimization problem is non-convex and is solved by gradient descent or its variants. Unfortunately, little is known about global convergence properties of these approaches. Here we consider the problem of learning a concave function $f$ on a compact convex domain $\Omega \subseteq \mathbb{R}^d$, using linear combinations of `bump-like' components (neurons). The parameters to be fitted are the centers of $N$ bumps, and the resulting empirical risk minimization problem is highly non-convex. We prove that, in the limit in which the number of neurons diverges, the evolution of gradient descent converges to a Wasserstein gradient flow in the space of probability distributions over $\Omega$. Further, when the bump width $\delta$ tends to $0$, this gradient flow has a limit which is a viscous porous medium equation. Remarkably, the cost function optimized by this gradient flow exhibits a special property known as displacement convexity, which implies exponential convergence rates for $N \to \infty$, $\delta \to 0$. Surprisingly, this asymptotic theory appears to capture well the behavior for moderate values of $\delta, N$. Explaining this phenomenon, and understanding the dependence on $\delta, N$ in a quantitative manner, remains an outstanding challenge.
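
To make the setup concrete, the following is a minimal numerical sketch of the kind of problem the abstract describes: a one-dimensional concave target on $\Omega = [0,1]$ is fitted by an average of $N$ Gaussian bumps of width $\delta$, and plain gradient descent is run on the bump centers only. The Gaussian bump shape, the $1/N$ scaling, the target function, and the step size are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Illustrative sketch (assumptions: Gaussian bumps, 1/N scaling, squared loss).
# Model: f_hat(x) = (1/N) * sum_i exp(-(x - w_i)^2 / (2 * delta^2)),
# where only the centers w_1, ..., w_N are trained by gradient descent.

rng = np.random.default_rng(0)

N = 200          # number of neurons (bumps)
delta = 0.05     # bump width
n = 1000         # number of training samples

def target(x):
    """A concave target on Omega = [0, 1]."""
    return x * (1.0 - x)

def bumps(x, w):
    """Activations of all N bumps at all sample points; shape (N, n)."""
    return np.exp(-(x - w[:, None]) ** 2 / (2.0 * delta ** 2))

def risk_and_grad(w, x, y):
    """Empirical squared-error risk and its gradient w.r.t. the centers."""
    phi = bumps(x, w)                    # (N, n) activations
    resid = phi.mean(axis=0) - y         # (n,) residuals of the averaged model
    risk = np.mean(resid ** 2)
    # d phi_i / d w_i = phi_i * (x - w_i) / delta^2
    grad = 2.0 * np.mean(resid * phi * (x - w[:, None]), axis=1) / (N * delta ** 2)
    return risk, grad

x_train = rng.uniform(0.0, 1.0, size=n)
y_train = target(x_train)
w = rng.uniform(0.0, 1.0, size=N)        # random initial centers in Omega

lr = 0.5
for _ in range(2000):
    risk, grad = risk_and_grad(w, x_train, y_train)
    w -= lr * grad

print(f"final empirical risk: {risk:.3e}")
```

In the $N \to \infty$ limit discussed above, the empirical distribution of the centers $w_i$ plays the role of the probability measure over $\Omega$ evolving under the Wasserstein gradient flow; the finite-$N$, finite-$\delta$ iteration sketched here is the non-convex problem that the asymptotic theory is meant to capture.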
