
Agnostic Learning of a Single Neuron with Gradient Descent

Abstract

We consider the problem of learning the best-fitting single neuron as measured by the expected square loss $\mathbb{E}_{(x,y)\sim \mathcal{D}}[(\sigma(w^\top x)-y)^2]$ over some unknown joint distribution $\mathcal{D}$ by using gradient descent to minimize the empirical risk induced by a set of i.i.d. samples $S\sim \mathcal{D}^n$. The activation function $\sigma$ is an arbitrary Lipschitz and non-decreasing function, making the optimization problem nonconvex and nonsmooth in general, and covers typical neural network activation functions and inverse link functions in the generalized linear model setting. In the agnostic PAC learning setting, where no assumption on the relationship between the labels $y$ and the input $x$ is made, if the optimal population risk is $\mathsf{OPT}$, we show that gradient descent achieves population risk $O(\mathsf{OPT}^{1/2})+\epsilon$ in polynomial time and sample complexity. When labels take the form $y = \sigma(v^\top x) + \xi$ for zero-mean sub-Gaussian noise $\xi$, we show that gradient descent achieves population risk $\mathsf{OPT} + \epsilon$. Our sample complexity and runtime guarantees are (almost) dimension independent, and when $\sigma$ is strictly increasing and Lipschitz, require no distributional assumptions beyond boundedness. For ReLU, we show the same results under a nondegeneracy assumption for the marginal distribution of the input. To the best of our knowledge, this is the first result for agnostic learning of a single neuron using gradient descent.
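To make the setup concrete, here is a minimal sketch (not from the paper) of gradient descent on the empirical square loss of a single neuron, with $\sigma$ taken to be ReLU for illustration. The function name `gd_single_neuron`, the zero initialization, the step size, and the iteration count are illustrative choices, not parameters prescribed by the paper's analysis.

```python
import numpy as np

def relu(z):
    # sigma(z) = max(z, 0): a 1-Lipschitz, non-decreasing activation
    return np.maximum(z, 0.0)

def relu_subgrad(z):
    # Subgradient of ReLU (taking 0 at z = 0)
    return (z > 0.0).astype(z.dtype)

def gd_single_neuron(X, y, step_size=0.1, n_iters=1000):
    """Gradient descent on the empirical risk of a single neuron.

    X : (n, d) array of inputs, y : (n,) array of labels.
    Minimizes (1/n) * sum_i (relu(w @ x_i) - y_i)^2, starting from w = 0.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        z = X @ w                        # pre-activations, shape (n,)
        residual = relu(z) - y           # sigma(w^T x_i) - y_i
        # (Sub)gradient of the empirical square loss with respect to w
        grad = (2.0 / n) * (X.T @ (residual * relu_subgrad(z)))
        w = w - step_size * grad
    return w

# Hypothetical usage: labels y = sigma(v^T x) + noise, as in the
# paper's noisy teacher-neuron setting.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 2000, 20
    X = rng.normal(size=(n, d)) / np.sqrt(d)
    v = rng.normal(size=d)
    y = relu(X @ v) + 0.05 * rng.normal(size=n)
    w_hat = gd_single_neuron(X, y)
    print("empirical risk:", np.mean((relu(X @ w_hat) - y) ** 2))
```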
