Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery

International Conference on Machine Learning (ICML), 2013
Abstract

Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires $\Omega(rn^{K-1})$ observations. In contrast, a certain (intractable) nonconvex formulation needs only $O(r^K + nrK)$ observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with $O(r^{\lfloor K/2 \rfloor} n^{\lceil K/2 \rceil})$ observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bounds for the sum-of-nuclear-norm model follow from a new result on simultaneously structured models, which may be of independent interest for matrix and vector recovery problems.
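The two norms contrasted in the abstract can be illustrated concretely. The sketch below, which is not from the paper itself, computes the sum of nuclear norms of the mode unfoldings and, for comparison, the nuclear norm of a single balanced ("square") matricization that groups the first $\lceil K/2 \rceil$ modes into rows and the rest into columns; the particular choice of mode grouping here is an illustrative assumption.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def sum_of_nuclear_norms(T):
    """Sum of nuclear norms of all mode unfoldings (the standard relaxation)."""
    return sum(np.linalg.norm(unfold(T, k), 'nuc') for k in range(T.ndim))

def square_norm(T):
    """Nuclear norm of one balanced matricization: the first ceil(K/2)
    modes index rows, the remaining floor(K/2) modes index columns.
    (This mode grouping is an illustrative choice, not prescribed here.)"""
    K = T.ndim
    split = (K + 1) // 2  # ceil(K/2)
    rows = int(np.prod(T.shape[:split]))
    return np.linalg.norm(T.reshape(rows, -1), 'nuc')

# Example: a 4-way, rank-one tensor (Tucker rank (1,1,1,1)).
rng = np.random.default_rng(0)
vecs = [rng.standard_normal(5) for _ in range(4)]
T = np.einsum('i,j,k,l->ijkl', *vecs)

# For a rank-one tensor every unfolding and the balanced matricization
# are rank-one matrices, so each nuclear norm equals the Frobenius norm.
print(square_norm(T), sum_of_nuclear_norms(T))
```

For this rank-one example the balanced matricization has nuclear norm equal to the Frobenius norm of the tensor, while the sum over the four unfoldings is four times that value; the norms differ more substantially for higher-rank tensors, which is where the gap between the two relaxations appears.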
