Proximal Newton-type methods for convex optimization
SIAM Journal on Optimization (SIOPT), 2012
Abstract
We seek to solve convex optimization problems in composite form:
\[
\underset{x \in \mathbb{R}^n}{\text{minimize}} \;\; f(x) := g(x) + h(x),
\]
where $g$ is convex and continuously differentiable and $h$ is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. We prove that these methods are globally convergent and achieve superlinear rates of convergence in the vicinity of an optimal solution. We also demonstrate the performance of these methods on problems of relevance in machine learning and statistics.
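To make the setup concrete, below is a minimal sketch (not the authors' reference implementation) of a proximal Newton iteration for a representative instance: $g$ is the logistic loss and $h(x) = \lambda \|x\|_1$, whose proximal mapping is soft-thresholding. The scaled-prox subproblem is solved inexactly by proximal gradient steps, and the function names, iteration counts, and the simple backtracking rule (plain decrease rather than the paper's sufficient-decrease condition) are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal mapping of t * ||.||_1 (closed form).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def g_grad_hess(X, y, w):
    # Logistic loss with labels y in {-1, +1}: value, gradient, Hessian.
    n = len(y)
    z = X @ w
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    val = np.mean(np.logaddexp(0.0, -y * z))
    grad = -X.T @ (y * sig(-y * z)) / n
    d = sig(z) * (1.0 - sig(z))
    hess = X.T @ (X * d[:, None]) / n
    return val, grad, hess

def prox_newton(X, y, lam, iters=20, inner=100):
    # Sketch of a proximal Newton method for f = g + lam*||.||_1.
    p = X.shape[1]
    w = np.zeros(p)
    f = lambda v: np.mean(np.logaddexp(0.0, -y * (X @ v))) + lam * np.sum(np.abs(v))
    for _ in range(iters):
        gval, grad, H = g_grad_hess(X, y, w)
        L = np.linalg.eigvalsh(H)[-1] + 1e-12  # Lipschitz constant of the quadratic model
        # Inexactly solve the scaled-prox subproblem
        #   min_d  grad @ d + 0.5 * d @ H @ d + lam * ||w + d||_1
        # by proximal gradient steps on the quadratic model.
        d = np.zeros(p)
        for _ in range(inner):
            u = d - (grad + H @ d) / L
            d = soft_threshold(w + u, lam / L) - w
        # Backtracking line search: halve the step until f decreases
        # (a simplification of a sufficient-decrease condition).
        alpha, fw = 1.0, gval + lam * np.sum(np.abs(w))
        while f(w + alpha * d) > fw and alpha > 1e-8:
            alpha *= 0.5
        w = w + alpha * d
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    w_true = np.zeros(50); w_true[:5] = 1.0
    y = np.sign(X @ w_true + 0.1 * rng.standard_normal(200))
    w_hat = prox_newton(X, y, lam=0.05)
    print("nonzero coefficients:", np.count_nonzero(np.abs(w_hat) > 1e-6))
```

Solving the subproblem only approximately, as above, is in the spirit of the inexact variants the paper covers; the key structural point is that each outer iteration minimizes a local quadratic model of $g$ plus the unchanged nonsmooth term $h$.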
