
Beyond Lazy Training for Over-parameterized Tensor Decomposition

Abstract

Over-parametrization is an important technique in training neural networks. In both theory and practice, training a larger network allows the optimization algorithm to avoid bad local optima. In this paper we study a closely related tensor decomposition problem: given an $l$-th order tensor in $(R^d)^{\otimes l}$ of rank $r$ (where $r \ll d$), can variants of gradient descent find a rank-$m$ decomposition where $m > r$? We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least $m = \Omega(d^{l-1})$, while a variant of gradient descent can find an approximate tensor when $m = O^*(r^{2.5l}\log d)$. Our results show that gradient descent on an over-parametrized objective can go beyond the lazy training regime and utilize certain low-rank structure in the data.
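
To make the setup concrete, here is a minimal sketch (not the paper's algorithm) of over-parameterized tensor decomposition for a 3rd-order symmetric tensor: plain gradient descent on the squared Frobenius loss of a rank-$m$ decomposition with $m > r$ components. The dimensions, initialization scale, and learning rate below are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

d, r, m = 8, 2, 16                 # ambient dimension, true rank, over-parameterized rank (l = 3)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)

# Ground-truth rank-r 3rd-order tensor T = sum_j b_j (x) b_j (x) b_j.
B = jax.random.normal(k1, (r, d))
T = jnp.einsum('ja,jb,jc->abc', B, B, B)

def loss(A):
    # Rank-m reconstruction sum_i a_i (x) a_i (x) a_i and its squared Frobenius error.
    T_hat = jnp.einsum('ia,ib,ic->abc', A, A, A)
    return jnp.sum((T_hat - T) ** 2)

A = 0.1 * jax.random.normal(k2, (m, d))   # small random init; scale chosen for illustration
lr = 1e-2                                  # illustrative step size
grad = jax.jit(jax.grad(loss))
for step in range(2000):
    A = A - lr * grad(A)

print("final loss:", float(loss(A)))
```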
