Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations

We present a polynomial-time reduction from solving noisy linear equations over in dimension with a uniformly random coefficient matrix to noisy linear equations over in dimension where each row of the coefficient matrix has uniformly random support of size . This allows us to deduce the hardness of sparse problems from their dense counterparts. In particular, we derive hardness results in the following canonical settings. 1) Assuming the -dimensional (dense) LWE over a polynomial-size field takes time , -sparse LWE in dimension takes time 2) Assuming the -dimensional (dense) LPN over takes time , -sparse LPN in dimension takes time These running time lower bounds are nearly tight as both sparse problems can be solved in time given sufficiently many samples. We further give a reduction from -sparse LWE to noisy tensor completion. Concretely, composing the two reductions implies that order- rank- noisy tensor completion in takes time , assuming the exponential hardness of standard worst-case lattice problems.
View on arXiv