
Width Provably Matters in Optimization for Deep Linear Neural Networks

Abstract

We prove that for an $L$-layer fully-connected linear neural network, if the width of every hidden layer is $\tilde\Omega(L \cdot r \cdot d_{\mathrm{out}} \cdot \kappa^3)$, where $r$ and $\kappa$ are the rank and the condition number of the input data and $d_{\mathrm{out}}$ is the output dimension, then gradient descent with Gaussian random initialization converges to a global minimum at a linear rate. The number of iterations to find an $\epsilon$-suboptimal solution is $O(\kappa \log(\frac{1}{\epsilon}))$. Our polynomial upper bound on the total running time for wide deep linear networks and the $\exp(\Omega(L))$ lower bound for narrow deep linear neural networks [Shamir, 2018] together demonstrate that wide layers are necessary for optimizing deep models.
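
The setting studied in the abstract can be illustrated with a minimal NumPy sketch: an $L$-layer fully-connected *linear* network $f(x) = W_L \cdots W_1 x$ trained by plain gradient descent on the squared loss from Gaussian random initialization. The width `m`, depth `L`, step size `eta`, and iteration count below are illustrative choices for a toy problem, not the constants from the theorem, and the code is not taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's code): gradient descent on a
# deep *linear* network f(x) = W_L ... W_1 x with Gaussian random initialization.
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, m, L = 20, 5, 256, 4        # input dim, output dim, hidden width, depth (toy values)
n = 100                                  # number of training samples

X = rng.standard_normal((d_in, n))       # input data (columns are samples)
W_star = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
Y = W_star @ X                           # realizable linear targets

# Gaussian random initialization; 1/sqrt(fan_in) scaling keeps the layer
# products well-behaved at initialization.
dims = [d_in] + [m] * (L - 1) + [d_out]
W = [rng.standard_normal((dims[l + 1], dims[l])) / np.sqrt(dims[l]) for l in range(L)]

def forward(W, X):
    H = X
    for Wl in W:
        H = Wl @ H
    return H

eta = 1e-4                               # illustrative step size, chosen for stability on this toy problem
for step in range(3000):
    # Forward pass, storing each intermediate activation for the backward pass.
    Hs = [X]
    for Wl in W:
        Hs.append(Wl @ Hs[-1])
    R = Hs[-1] - Y                       # residual of the loss 0.5 * ||W_L...W_1 X - Y||_F^2
    # Backward pass: gradient of the squared loss with respect to each layer.
    G = R
    grads = [None] * L
    for l in reversed(range(L)):
        grads[l] = G @ Hs[l].T           # grad wrt W_l: (layers above)^T R (input to layer l)^T
        G = W[l].T @ G
    for l in range(L):
        W[l] -= eta * grads[l]

loss = 0.5 * np.linalg.norm(forward(W, X) - Y) ** 2
print(f"final loss: {loss:.3e}")         # on this toy instance the loss decays roughly geometrically
```

On this small, well-conditioned instance the training loss shrinks at a roughly geometric rate, which is the qualitative behavior the abstract's $O(\kappa \log(\frac{1}{\epsilon}))$ iteration bound formalizes for sufficiently wide networks.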
