Learning Depth-Three Neural Networks in Polynomial Time
We give a polynomial-time algorithm for learning neural networks with one hidden layer of sigmoids feeding into any Lipschitz, monotone activation function (e.g., sigmoid or ReLU). We make no assumptions on the structure of the network, and the algorithm succeeds with respect to any distribution on the unit ball in n dimensions (hidden weight vectors also have unit norm). This is the first assumption-free, provably efficient algorithm for learning neural networks with more than one nonlinear layer. Our algorithm, Alphatron, is a simple, iterative update rule that combines isotonic regression with kernel methods. It outputs a hypothesis that yields efficient oracle access to interpretable features. It also suggests a new approach to Boolean function learning via smooth relaxations of hard thresholds, sidestepping traditional hardness results from computational learning theory. As applications, we obtain the first provably correct algorithms for common schemes in multiple-instance learning (in the difficult case where the examples within each bag are not identically distributed) as well as the first polynomial-time algorithm for learning intersections of a polynomial number of halfspaces with a margin.
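To make the "isotonic regression plus kernel methods" combination concrete, here is a minimal sketch of the kind of kernelized iterative update Alphatron performs. It is illustrative only: parameter names (`lam`, `T`), the Gram-matrix construction, and the omission of the paper's model selection over iterates are all assumptions of this sketch, not the paper's pseudocode.

```python
import numpy as np

def alphatron_sketch(X, y, kernel, u, T=100, lam=0.1):
    """Illustrative kernelized update in the spirit of Alphatron.

    X: (m, n) training inputs on the unit ball; y: target values;
    kernel: a PSD kernel function; u: the known monotone, Lipschitz
    outer activation (e.g. a sigmoid). Hypothetical sketch, not the
    paper's exact procedure (which also selects among iterates).
    """
    m = X.shape[0]
    # Gram matrix over the training sample
    K = np.array([[kernel(a, b) for b in X] for a in X])
    alpha = np.zeros(m)
    for _ in range(T):
        preds = u(K @ alpha)              # current hypothesis on the sample
        alpha += (lam / m) * (y - preds)  # nudge each weight toward its residual
    # Return the learned hypothesis as a function of a new point x
    return lambda x: u(sum(a * kernel(xi, x) for a, xi in zip(alpha, X)))
```

Roughly, the analysis pairs updates of this form with a kernel whose feature space can approximate the hidden layer of sigmoids, so the learned dual weights `alpha` implicitly encode the network's first layer.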