Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials

Abstract

Suppose we are given an $n$-dimensional order-3 symmetric tensor $T \in (\mathbb{R}^n)^{\otimes 3}$ that is the sum of $r$ random rank-1 terms. The problem of recovering the rank-1 components is possible in principle when $r \lesssim n^2$, but polynomial-time algorithms are only known in the regime $r \ll n^{3/2}$. Similar "statistical-computational gaps" occur in many high-dimensional inference tasks, and in recent years there has been a flurry of work on explaining the apparent computational hardness in these problems by proving lower bounds against restricted (yet powerful) models of computation such as statistical queries (SQ), sum-of-squares (SoS), and low-degree polynomials (LDP). However, no such prior work exists for tensor decomposition, largely because its hardness does not appear to be explained by a "planted versus null" testing problem. We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest (to break symmetry) and the components are drawn uniformly from the hypercube. We resolve the computational complexity of this problem in the LDP model: $O(\log n)$-degree polynomial functions of the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$. This provides rigorous evidence that the best known algorithms for tensor decomposition cannot be improved, at least by known approaches. A natural extension of the result holds for tensors of any fixed order $k \ge 3$, in which case the LDP threshold is $r \sim n^{k/2}$.
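To make the setup concrete, the following is a minimal NumPy sketch of the random tensor model described above: $r$ components drawn uniformly from the hypercube $\{-1,+1\}^n$, with the first component given slightly larger norm to break symmetry. The function name, the weight factor 1.1, and the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def random_tensor(n: int, r: int, boost: float = 1.1, seed: int = 0) -> np.ndarray:
    """Sample T = boost * a_1^{(x)3} + sum_{i>=2} a_i^{(x)3}, an order-3 symmetric tensor."""
    rng = np.random.default_rng(seed)
    # Components drawn uniformly from the hypercube {-1, +1}^n.
    a = rng.choice([-1.0, 1.0], size=(r, n))
    # One component is slightly larger in norm than the rest (symmetry breaking);
    # the factor 1.1 is an arbitrary illustrative choice.
    weights = np.ones(r)
    weights[0] = boost
    # T[j, k, l] = sum_i w_i * a[i, j] * a[i, k] * a[i, l]: a sum of r rank-1 terms.
    return np.einsum("i,ij,ik,il->jkl", weights, a, a, a)

# Example: with n = 50 the conjectured LDP threshold is r ~ n^{3/2} ~ 354,
# so r = 200 lies in the regime where estimating a_1 is tractable.
T = random_tensor(n=50, r=200)
assert np.allclose(T, T.transpose(1, 0, 2))  # symmetric in its indices
```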
