Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery
Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires Ω(r n^{K−1}) observations. In contrast, a certain (intractable) nonconvex formulation needs only O(r^K + nrK) observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(r^{⌊K/2⌋} n^{⌈K/2⌉}) observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bounds for the sum-of-nuclear-norms model follow from a new result on simultaneously structured models, which may be of independent interest for matrix and vector recovery problems.
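The two objectives contrasted above can be sketched concretely. Below is a minimal NumPy illustration (the function names are ours, not from the paper): the sum-of-nuclear-norms objective adds the nuclear norm of every mode-k unfolding, while the "square" relaxation takes the nuclear norm of a single, more balanced matricization that groups roughly half the modes into rows and half into columns.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: bring axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def nuclear_norm(M):
    """Sum of singular values of a matrix."""
    return np.linalg.svd(M, compute_uv=False).sum()

def sum_of_nuclear_norms(T):
    """Standard relaxation: sum the nuclear norms of all K unfoldings."""
    return sum(nuclear_norm(unfold(T, k)) for k in range(T.ndim))

def square_nuclear_norm(T):
    """'Square' relaxation (illustrative): nuclear norm of one balanced
    reshaping, with the first ceil(K/2) modes as rows."""
    half = (T.ndim + 1) // 2
    rows = int(np.prod(T.shape[:half]))
    return nuclear_norm(np.asarray(T).reshape(rows, -1))

# Toy example: a rank-1 4-way tensor of all ones, n = 3
v = np.ones(3)
T = np.einsum('i,j,k,l->ijkl', v, v, v, v)
print(sum_of_nuclear_norms(T))  # each 3x27 unfolding has nuclear norm 9
print(square_nuclear_norm(T))   # one 9x9 reshaping, nuclear norm 9
```

The balanced reshaping is the intuition behind the improved O(r^{⌊K/2⌋} n^{⌈K/2⌉}) sample complexity: a square n^{⌈K/2⌉} × n^{⌊K/2⌋} matrix of low rank is easier to recover than several highly rectangular n × n^{K−1} unfoldings.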