
Newton-type Methods with the Proximal Gradient Step for Sparse Estimation

Abstract

In this paper, we propose new methods for efficiently solving the convex optimization problems that arise in sparse estimation. These include a new quasi-Newton method that avoids computing the Hessian matrix, improving efficiency, and we prove its fast convergence. We also prove local convergence of the Newton method under weaker assumptions. Because each update performs variable selection, the proposed methods are particularly efficient for L1-regularized and group-regularized problems. Through numerical experiments, we demonstrate the efficiency of our methods on problems encountered in sparse estimation. Our contributions include both theoretical guarantees and practical applications to a variety of problems.
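The proximal gradient step referenced in the title is the standard building block here: for an L1-regularized least-squares objective, it reduces to a gradient step on the smooth part followed by soft thresholding, which sets coordinates exactly to zero and thus performs the per-update variable selection the abstract mentions. The sketch below shows this classical step (not the authors' Newton-type method; the objective, names, and parameters are illustrative assumptions):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_gradient_step(x, A, b, lam, step):
    """One proximal gradient step for the lasso objective
    0.5 * ||A x - b||^2 + lam * ||x||_1 (illustrative example)."""
    grad = A.T @ (A @ x - b)                 # gradient of the smooth part
    return soft_threshold(x - step * grad, step * lam)

# Coordinates hit by the threshold become exactly zero, so each
# update selects a sparse set of active variables.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0])
b = A @ x_true
x = np.zeros(5)
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L with L = ||A||_2^2
for _ in range(200):
    x = prox_gradient_step(x, A, b, lam=0.1, step=step)
```

Newton-type variants, such as those proposed in the paper, replace the plain gradient step with a (quasi-)Newton direction while keeping the proximal mapping that induces sparsity.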
