Exact Penalty Methods for Non-Lipschitz Optimization
This paper considers a class of constrained optimization problems with a possibly nonconvex, non-Lipschitz objective and a certain ellipsoidal constraint. Such problems have a wide range of applications in data science: the objective induces sparsity in the solution, and the constraint encodes a noise-tolerance condition for data fitting. While the penalty method is a common approach to constrained optimization, there is little theory or algorithmic development concerning exact penalization for problems with nonconvex non-Lipschitz objectives. In this paper, we study the existence of exact penalty parameters for this problem with respect to local minimizers, stationary points and ε-minimizers under suitable assumptions. Moreover, we propose a penalty method whose subproblems are solved via a proximal gradient method, with an update scheme for the penalty parameters. We also prove convergence of the algorithm to a KKT point of the constrained problem. Preliminary numerical results show the efficiency of the penalty method for finding sparse solutions.
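To make the algorithmic idea concrete, the following is a minimal sketch of a penalty method of this flavor. It replaces the paper's nonconvex non-Lipschitz objective with a convex ℓ₁ stand-in (whose proximal map is soft-thresholding), penalizes violation of a residual constraint ‖Ax − b‖ ≤ σ with a quadratic penalty, solves each subproblem by proximal gradient, and doubles the penalty parameter until near-feasibility. All function names, parameter values, and the update rule here are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def penalty_prox_grad(A, b, sigma, lam=0.1, mu0=1.0, outer=30, inner=500):
    """Illustrative penalty method (NOT the paper's exact algorithm):
       min  lam * ||x||_1   s.t.  ||Ax - b||_2 <= sigma,
    handled via the penalized subproblems
       min  lam * ||x||_1 + (mu/2) * max(||Ax - b||^2 - sigma^2, 0),
    each solved by proximal gradient (soft-thresholding), with mu doubled
    until the iterate is nearly feasible. The paper treats nonconvex
    non-Lipschitz objectives; l_1 is used here as a convex stand-in."""
    m, n = A.shape
    x = np.zeros(n)
    mu = mu0
    L0 = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad of ||Ax-b||^2 / 2
    for _ in range(outer):
        t = 1.0 / (mu * L0)  # step size matched to the smooth penalty term
        for _ in range(inner):
            r = A @ x - b
            # gradient of the penalty: mu * A^T r when infeasible, else 0
            grad = mu * (A.T @ r) if r @ r > sigma ** 2 else np.zeros(n)
            z = x - t * grad
            # prox of t * lam * ||.||_1: componentwise soft-thresholding
            x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
        if np.linalg.norm(A @ x - b) <= sigma * (1 + 1e-3):
            break          # nearly feasible: stop increasing the penalty
        mu *= 2.0          # tighten the penalty otherwise
    return x
```

One design point worth noting: as μ grows, the step size t shrinks proportionally, so the effective gradient step (1/L₀)·Aᵀ(Ax − b) stays constant while the shrinkage threshold t·λ vanishes; this is why the iterates are driven toward feasibility rather than collapsing to zero.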