Width Provably Matters in Optimization for Deep Linear Neural Networks

Abstract
We prove that for an $L$-layer fully-connected linear neural network, if the width of every hidden layer is $\tilde{\Omega}(L \cdot r \cdot d_{\mathrm{out}} \cdot \kappa^3)$, where $r$ and $\kappa$ are the rank and the condition number of the input data and $d_{\mathrm{out}}$ is the output dimension, then gradient descent with Gaussian random initialization converges to a global minimum at a linear rate. The number of iterations to find an $\epsilon$-suboptimal solution is $O(\kappa \log(1/\epsilon))$. Our polynomial upper bound on the total running time for wide deep linear networks and the $\exp(\Omega(L))$ lower bound for narrow deep linear neural networks [Shamir, 2018] together demonstrate that wide layers are necessary for optimizing deep models.
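
As a concrete illustration of the setting studied here (not taken from the paper), the sketch below trains a fully-connected deep linear network with wide hidden layers by full-batch gradient descent from Gaussian random initialization on a least-squares objective. The dimensions, width, initialization scale, step size, and iteration count are illustrative assumptions, not the constants from the theorem.

```python
# Minimal sketch (assumed setup, not the paper's exact construction or constants):
# an L-layer deep *linear* network f(x) = W_L ... W_1 x trained by full-batch
# gradient descent from Gaussian random initialization on a least-squares loss.
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes (illustrative choices).
d_in, d_out, n = 10, 3, 50   # input dim, output dim, number of samples
L, m = 3, 64                 # depth and hidden-layer width (wide regime)

# Synthetic, realizable data: X has full rank with high probability.
X = rng.standard_normal((d_in, n))
Y = rng.standard_normal((d_out, d_in)) @ X

# Gaussian random initialization with a 1/sqrt(fan_in) scale so the product
# map is well-behaved at the start (the paper uses its own scaling).
dims = [d_in] + [m] * (L - 1) + [d_out]
W = [rng.standard_normal((dims[l + 1], dims[l])) / np.sqrt(dims[l])
     for l in range(L)]

def loss(W):
    """Least-squares loss of the end-to-end linear map W_L ... W_1."""
    P = np.linalg.multi_dot(W[::-1])
    return 0.5 * np.linalg.norm(P @ X - Y) ** 2 / n

eta = 0.005                  # step size (illustrative)
for step in range(4001):
    # Forward pass: cache every intermediate activation.
    acts = [X]
    for W_l in W:
        acts.append(W_l @ acts[-1])
    R = (acts[-1] - Y) / n   # residual of the end-to-end prediction

    # Backward pass: gradient with respect to each factor W_l.
    grads, back = [], R
    for l in reversed(range(L)):
        grads.append(back @ acts[l].T)
        back = W[l].T @ back
    grads.reverse()

    # Plain (full-batch) gradient descent step on every layer.
    W = [W_l - eta * G_l for W_l, G_l in zip(W, grads)]

    if step % 1000 == 0:
        # The loss should shrink at a (roughly) geometric rate in this regime.
        print(f"step {step:5d}  loss {loss(W):.3e}")
```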