
Electron-Proton Dynamics in Deep Learning

Innovations in Theoretical Computer Science (ITCS), 2017
Abstract

We study the efficacy of learning neural networks with neural networks trained by (stochastic) gradient descent. While gradient descent enjoys empirical success in a variety of applications, there is a lack of theoretical guarantees that explain the practical utility of deep learning. We focus on two-layer neural networks with a linear activation on the output node. We show that, under some mild assumptions and for certain classes of activation functions, gradient descent does learn the parameters of the neural network and converges to the global minimum. Using a node-wise gradient descent algorithm, we show that learning can be done in finite, sometimes poly(d, 1/ϵ), time and sample complexity.
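As a rough illustration of the setting, the sketch below trains a two-layer network with a linear output node on labels produced by a planted "teacher" network of the same architecture, updating one hidden node's weights at a time in the spirit of the node-wise gradient descent the abstract mentions. The tanh activation, the teacher-student data generation, and all hyperparameters (d, k, learning rate, epochs) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 10, 5, 2000        # input dim, hidden nodes, samples (assumed)
lr, epochs = 0.05, 500       # step size and passes (assumed)

def act(z):
    # Smooth activation; the paper covers certain activation classes,
    # tanh is chosen here purely for illustration.
    return np.tanh(z)

def act_grad(z):
    return 1.0 - np.tanh(z) ** 2

# Planted teacher: labels come from a true two-layer network whose
# output node is linear (here with unit output weights).
W_star = rng.standard_normal((k, d))
X = rng.standard_normal((n, d))
y = act(X @ W_star.T).sum(axis=1)

# Student network of the same architecture, random initialization.
W = rng.standard_normal((k, d))

def loss(W):
    return 0.5 * np.mean((act(X @ W.T).sum(axis=1) - y) ** 2)

# Node-wise gradient descent: cycle through the hidden nodes, taking a
# gradient step on one node's incoming weights while the rest stay fixed.
for epoch in range(epochs):
    for j in range(k):
        resid = act(X @ W.T).sum(axis=1) - y          # (n,)
        grad_j = (resid * act_grad(X @ W[j])) @ X / n  # (d,)
        W[j] -= lr * grad_j

print(f"final loss: {loss(W):.6f}")
```

On synthetic data of this kind the loss typically drops to near zero, which is the behavior the convergence guarantees formalize; the paper's actual conditions on the activation and the data are what make the global-minimum claim rigorous.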
