
Provable Acceleration of Nesterov's Accelerated Gradient for Rectangular Matrix Factorization and Linear Neural Networks

Abstract

We study the convergence rate of first-order methods for rectangular matrix factorization, a canonical nonconvex optimization problem. Specifically, given a rank-$r$ matrix $\mathbf{A}\in\mathbb{R}^{m\times n}$, we prove that gradient descent (GD) can find a pair of $\epsilon$-optimal solutions $\mathbf{X}_T\in\mathbb{R}^{m\times d}$ and $\mathbf{Y}_T\in\mathbb{R}^{n\times d}$, where $d\geq r$, satisfying $\lVert\mathbf{X}_T\mathbf{Y}_T^\top-\mathbf{A}\rVert_\mathrm{F}\leq\epsilon\lVert\mathbf{A}\rVert_\mathrm{F}$ in $T=O(\kappa^2\log\frac{1}{\epsilon})$ iterations with high probability, where $\kappa$ denotes the condition number of $\mathbf{A}$. Furthermore, we prove that Nesterov's accelerated gradient (NAG) attains an iteration complexity of $O(\kappa\log\frac{1}{\epsilon})$, which is the best-known bound among first-order methods for rectangular matrix factorization. Unlike the small, balanced random initialization used in the existing literature, we adopt an unbalanced initialization, where $\mathbf{X}_0$ is large and $\mathbf{Y}_0$ is $0$. Moreover, our initialization and analysis extend to linear neural networks, where we prove that NAG also attains an accelerated linear convergence rate. In particular, we only require the width of the network to be greater than or equal to the rank of the output label matrix. In contrast, previous results achieving the same rate require excessive widths that additionally depend on the condition number and the rank of the input data matrix.
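
The sketch below is a minimal illustration of the setting described in the abstract: minimizing $f(\mathbf{X},\mathbf{Y})=\frac{1}{2}\lVert\mathbf{X}\mathbf{Y}^\top-\mathbf{A}\rVert_\mathrm{F}^2$ with NAG from an unbalanced initialization ($\mathbf{X}_0$ large, $\mathbf{Y}_0=0$). It is not the paper's algorithm or constants: the initialization scale, step size eta, and constant momentum beta are heuristic assumptions chosen only so the example runs stably.

```python
import numpy as np

# Minimal sketch (not the paper's exact method/parameters): factor a rank-r matrix
# A ~ X @ Y.T with d >= r by running Nesterov's accelerated gradient on
#     f(X, Y) = 0.5 * ||X @ Y.T - A||_F^2,
# starting from the unbalanced initialization (X_0 "large", Y_0 = 0).
# The initialization scale, step size `eta`, and momentum `beta` are assumed heuristics.

rng = np.random.default_rng(0)
m, n, r, d = 100, 80, 5, 8                                      # problem sizes, d >= r
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank-r target matrix

sigma1 = np.linalg.norm(A, 2)                                   # largest singular value of A
# "Large" X_0 (assumed scale: X_0^T X_0 is roughly sigma1 * I), Y_0 = 0.
X = np.sqrt(sigma1 / m) * rng.standard_normal((m, d))
Y = np.zeros((n, d))

eta = 0.5 / (np.linalg.norm(X, 2) ** 2 + sigma1)                # conservative step size (assumed)
beta = 0.9                                                      # constant momentum (assumed)

X_prev, Y_prev = X.copy(), Y.copy()
for t in range(2000):
    # NAG: evaluate the gradient at an extrapolated (look-ahead) point.
    X_ex = X + beta * (X - X_prev)
    Y_ex = Y + beta * (Y - Y_prev)
    R = X_ex @ Y_ex.T - A                                       # residual at the look-ahead point
    grad_X = R @ Y_ex                                           # gradient of f w.r.t. X
    grad_Y = R.T @ X_ex                                         # gradient of f w.r.t. Y
    X_prev, Y_prev = X, Y
    X = X_ex - eta * grad_X
    Y = Y_ex - eta * grad_Y

rel_err = np.linalg.norm(X @ Y.T - A) / np.linalg.norm(A)
print(f"relative error ||X Y^T - A||_F / ||A||_F = {rel_err:.2e}")
```

Setting `beta = 0` recovers plain gradient descent from the same unbalanced initialization, which is a simple way to compare the two methods' empirical convergence in this toy setup.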
