
Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions

Abstract

Consider the regularized sparse minimization problem, which involves empirical sums of loss functions for $n$ data points (each of dimension $d$) and a nonconvex sparsity penalty. We prove that finding an $\mathcal{O}(n^{c_1} d^{c_2})$-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any $c_1, c_2 \in [0,1)$ such that $c_1 + c_2 < 1$. The result applies to a broad class of loss functions and sparse penalty functions. It suggests that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P $=$ NP.
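
For concreteness, a minimal sketch of the problem form that such results typically concern is given below; the symbols $\ell$, $p$, $(a_i, b_i)$, and $\lambda$ are illustrative assumptions rather than notation fixed by the abstract, with $\ell$ a loss from the admissible class and $p$ a concave sparsity penalty:

$$
\min_{x \in \mathbb{R}^{d}} \; \sum_{i=1}^{n} \ell\bigl(a_i^{\top} x,\, b_i\bigr) \;+\; \lambda \sum_{j=1}^{d} p\bigl(|x_j|\bigr).
$$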
