Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions

Abstract
Consider the regularized sparse minimization problem, which involves empirical sums of loss functions for n data points (each of dimension d) and a nonconvex sparsity penalty. We prove that finding an O(n^{c_1} d^{c_2})-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any c_1, c_2 ∈ [0, 1) such that c_1 + c_2 < 1. The result applies to a broad class of loss functions and sparse penalty functions. It suggests that one cannot even approximately solve the sparse optimization problem in polynomial time, unless P = NP.
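For concreteness, the class of problems covered by results of this kind can be sketched as follows (a representative formulation, assuming the standard setup with loss ℓ, concave penalty p, and regularization weight λ; the exact assumptions on ℓ and p are stated in the paper):

```latex
\min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{n} \ell\!\left(a_i^{\top} x,\, b_i\right)
  \;+\; \lambda \sum_{j=1}^{d} p\!\left(|x_j|\right),
```

where $(a_i, b_i)$ are the $n$ data points, $\ell$ is a loss function (e.g. squared loss), and $p$ is a concave sparsity penalty such as the $L_q$ penalty ($0 < q < 1$), SCAD, or MCP.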