
The Fine-Grained Hardness of Sparse Linear Regression

Abstract

Sparse linear regression is the well-studied inference problem where one is given a design matrix $\mathbf{A} \in \mathbb{R}^{M\times N}$ and a response vector $\mathbf{b} \in \mathbb{R}^M$, and the goal is to find a solution $\mathbf{x} \in \mathbb{R}^{N}$ which is $k$-sparse (that is, it has at most $k$ non-zero coordinates) and minimizes the prediction error $\|\mathbf{A} \mathbf{x} - \mathbf{b}\|_2$. On the one hand, the problem is known to be $\mathcal{NP}$-hard, which tells us that no polynomial-time algorithm exists unless $\mathcal{P} = \mathcal{NP}$. On the other hand, the best known algorithms for the problem do a brute-force search among $N^k$ possibilities. In this work, we show that there are no better-than-brute-force algorithms, assuming any one of a variety of popular conjectures, including the weighted $k$-clique conjecture from the area of fine-grained complexity, or the hardness of the closest vector problem from the geometry of numbers. We also show the impossibility of better-than-brute-force algorithms when the prediction error is measured in other $\ell_p$ norms, assuming the strong exponential-time hypothesis.
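
To make the brute-force baseline mentioned in the abstract concrete, here is a minimal illustrative sketch (not taken from the paper): it enumerates all size-$k$ supports and solves a least-squares problem restricted to each one, which is the $N^k$-time search the lower bounds are measured against. The function name and the toy data are assumptions for illustration only.

```python
# Illustrative sketch of the brute-force baseline for k-sparse regression:
# try every support of size k and keep the least-squares fit with the
# smallest prediction error ||Ax - b||_2. Cost scales roughly as N^k.
import itertools
import numpy as np

def brute_force_sparse_regression(A, b, k):
    """Return a k-sparse x minimizing ||Ax - b||_2 by exhaustive search."""
    M, N = A.shape
    best_x, best_err = None, np.inf
    for support in itertools.combinations(range(N), k):
        cols = list(support)
        # Least-squares fit using only the chosen k columns of A.
        coef, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
        err = np.linalg.norm(A[:, cols] @ coef - b)
        if err < best_err:
            best_err = err
            best_x = np.zeros(N)
            best_x[cols] = coef
    return best_x, best_err

# Tiny usage example with a planted 2-sparse solution (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -3.0]
b = A @ x_true
x_hat, err = brute_force_sparse_regression(A, b, k=2)
print(err)  # close to 0: the planted support is recovered
```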
