
An inexact subsampled proximal Newton-type method for large-scale machine learning

Abstract

We propose a fast proximal Newton-type algorithm for minimizing regularized finite sums that returns an $\epsilon$-suboptimal point in $\tilde{\mathcal{O}}(d(n + \sqrt{\kappa d})\log(\frac{1}{\epsilon}))$ FLOPS, where $n$ is the number of samples, $d$ is the feature dimension, and $\kappa$ is the condition number. As long as $n > d$, the proposed method is more efficient than state-of-the-art accelerated stochastic first-order methods for non-smooth regularizers, which require $\tilde{\mathcal{O}}(d(n + \sqrt{\kappa n})\log(\frac{1}{\epsilon}))$ FLOPS. The key idea is to form the subsampled Newton subproblem in a way that preserves the finite-sum structure of the objective, thereby allowing us to leverage recent developments in stochastic first-order methods to solve the subproblem. Experimental results verify that the proposed algorithm outperforms previous algorithms for $\ell_1$-regularized logistic regression on real datasets.
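To make the idea concrete, below is a minimal sketch of a subsampled proximal Newton outer loop for $\ell_1$-regularized logistic regression. It is an illustration under assumptions, not the authors' implementation: the paper solves each subsampled quadratic subproblem inexactly with an accelerated stochastic first-order method, whereas this sketch substitutes a plain proximal-gradient inner solver, and all batch sizes, iteration counts, and step sizes are hypothetical choices.

```python
# Hedged sketch: subsampled proximal Newton for l1-regularized logistic regression.
# The subsampled Hessian H_S = (1/|S|) * A_S^T diag(w) A_S is kept in row (finite-sum)
# form, so the inner subproblem min_y g^T(y-x) + 0.5(y-x)^T H_S (y-x) + lam*||y||_1
# could itself be handled by a stochastic first-order method; here we use plain
# proximal gradient steps for simplicity.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def logistic_grad(A, b, x):
    """Gradient of the average logistic loss over rows of A, labels b in {-1, +1}."""
    s = 1.0 / (1.0 + np.exp(b * (A @ x)))      # sigmoid(-b_i * a_i^T x)
    return A.T @ (-b * s) / A.shape[0]

def subsampled_prox_newton(A, b, lam, n_outer=20, n_inner=100, batch=256, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_outer):
        g = logistic_grad(A, b, x)                                   # full gradient at x
        idx = rng.choice(n, size=min(batch, n), replace=False)       # Hessian subsample S
        As, bs = A[idx], b[idx]
        p = 1.0 / (1.0 + np.exp(-bs * (As @ x)))
        w = p * (1.0 - p)                                            # logistic curvature weights
        # Lipschitz constant of the quadratic model's smooth part (largest eigenvalue of H_S).
        L = np.linalg.norm(As * np.sqrt(w)[:, None], 2) ** 2 / len(idx) + 1e-8
        y = x.copy()
        for _ in range(n_inner):                                     # inexact inner solve
            Hdy = As.T @ (w * (As @ (y - x))) / len(idx)             # H_S (y - x)
            y = soft_threshold(y - (g + Hdy) / L, lam / L)
        x = y
    return x
```

Because the subsampled quadratic model is itself an average over the rows indexed by $S$ plus a non-smooth $\ell_1$ term, the inner proximal-gradient loop above could in principle be replaced by an accelerated stochastic solver over those rows, which is the substitution the paper's complexity analysis relies on.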
