Relative Error Tensor Low Rank Approximation

We consider relative error low rank approximation of tensors with respect to the Frobenius norm: given an order-$q$ tensor $A \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_q}$, output a rank-$k$ tensor $B$ for which $\|A - B\|_F^2 \le (1+\epsilon)\,\mathrm{OPT}$, where $\mathrm{OPT} = \inf_{\text{rank-}k\ A'} \|A - A'\|_F^2$. Despite the success in obtaining relative error low rank approximations for matrices, no such results were known for tensors. One structural issue is that there may be no rank-$k$ tensor $A_k$ achieving the above infimum. Another, computational, issue is that an efficient relative error low rank approximation algorithm for tensors would allow one to compute the rank of a tensor, which is NP-hard. We bypass these issues via (1) bicriteria and (2) parameterized complexity solutions: (1) We give an algorithm which outputs a rank-$O((k/\epsilon)^{q-1})$ tensor $B$ for which $\|A - B\|_F^2 \le (1+\epsilon)\,\mathrm{OPT}$ in $\mathrm{nnz}(A) + n \cdot \mathrm{poly}(k/\epsilon)$ time in the real RAM model. Here $\mathrm{nnz}(A)$ is the number of non-zero entries in $A$. (2) We give an algorithm for any $\delta > 0$ which outputs a rank-$k$ tensor $B$ for which $\|A - B\|_F^2 \le (1+\epsilon)\,\mathrm{OPT}$ and runs in $(\mathrm{nnz}(A) + n \cdot \mathrm{poly}(k/\epsilon) + \exp(\mathrm{poly}(k/\epsilon))) \cdot n^{\delta}$ time in the unit cost RAM model. For outputting a rank-$k$ tensor, or even a bicriteria solution with rank $Ck$ for a certain constant $C > 1$, we show a $2^{\Omega(k^{1-o(1)})}$ time lower bound under the Exponential Time Hypothesis. Our results give the first relative error low rank approximations for tensors for a large number of robust error measures for which nothing was known, as well as column, row, and tube subset selection. We also obtain new results for matrices, such as $\mathrm{nnz}(A)$-time CUR decompositions, improving previous $\mathrm{nnz}(A)\log n$-time algorithms, which may be of independent interest.
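The objective in the guarantee above can be made concrete with a small sketch. The snippet below is an illustration of the error measure only, not of the paper's algorithms: it builds a rank-$k$ candidate $B$ from hypothetical factor matrices (one column per rank-1 component of an order-3 tensor) and evaluates the squared Frobenius error $\|A - B\|_F^2$ that $\mathrm{OPT}$ and the $(1+\epsilon)$ bound refer to.

```python
# Illustrative sketch (not the paper's algorithm): measure the Frobenius-norm
# fit of a candidate rank-k CP decomposition of an order-3 tensor A.
import numpy as np

def cp_reconstruct(U, V, W):
    """Rank-k tensor B = sum_r U[:, r] outer V[:, r] outer W[:, r]."""
    return np.einsum('ir,jr,kr->ijk', U, V, W)

def frobenius_error_sq(A, U, V, W):
    """Squared Frobenius error ||A - B||_F^2 of the rank-k candidate."""
    B = cp_reconstruct(U, V, W)
    return np.sum((A - B) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 20, 3
    # Build a tensor that is exactly rank k plus small noise, so OPT is small.
    U0, V0, W0 = (rng.standard_normal((n, k)) for _ in range(3))
    A = cp_reconstruct(U0, V0, W0) + 1e-3 * rng.standard_normal((n, n, n))
    # The true factors give an error close to the injected noise energy.
    print(frobenius_error_sq(A, U0, V0, W0))
```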
View on arXiv