An inexact subsampled proximal Newton-type method for large-scale machine learning

We propose a fast proximal Newton-type algorithm for minimizing regularized finite sums that returns an ε-suboptimal point in Õ(d(n + √(κd)) log(1/ε)) FLOPS, where n is the number of samples, d is the feature dimension, and κ is the condition number. As long as d < n, the proposed method is more efficient than state-of-the-art accelerated stochastic first-order methods for non-smooth regularizers, which require Õ(d(n + √(κn)) log(1/ε)) FLOPS. The key idea is to form the subsampled Newton subproblem in a way that preserves the finite-sum structure of the objective, thereby allowing us to leverage recent developments in stochastic first-order methods to solve the subproblem. Experimental results verify that the proposed algorithm outperforms previous algorithms for ℓ1-regularized logistic regression on real datasets.
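To make the key idea concrete, below is a minimal sketch (not the authors' implementation) of an inexact subsampled proximal Newton step for ℓ1-regularized logistic regression: the subsampled quadratic model keeps the finite-sum form, so it can be solved inexactly by a stochastic variance-reduced proximal method (prox-SVRG stands in here for an accelerated solver). All function names, subsample sizes, step sizes, and iteration counts are illustrative assumptions, not the quantities analyzed in the paper.

```python
# Minimal NumPy sketch: an outer inexact proximal Newton loop whose
# subsampled quadratic subproblem keeps the finite-sum form and is
# solved inexactly with prox-SVRG. Constants are illustrative only.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def logistic_grad(w, X, y):
    """Gradient of the average logistic loss over (X, y) with 0/1 labels y."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / X.shape[0]

def prox_newton_l1(X, y, lam, n_outer=20, m_sub=None, n_inner=50, eta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m_sub = m_sub or min(n, 10 * d)         # Hessian subsample size (heuristic)
    w = np.zeros(d)
    for _ in range(n_outer):
        g = logistic_grad(w, X, y)           # full gradient of the smooth part
        S = rng.choice(n, size=m_sub, replace=False)
        XS = X[S]
        pS = 1.0 / (1.0 + np.exp(-XS @ w))
        hS = pS * (1.0 - pS)                 # per-sample curvature weights
        # Subproblem: min_v  g^T v + (1/2) v^T H_S v + lam * ||w + v||_1,
        # where H_S = (1/m) * sum_i h_i x_i x_i^T keeps the finite-sum form.
        def sub_grad_i(v, i):
            xi = XS[i]
            return g + hS[i] * xi * (xi @ v)
        def sub_grad_full(v):
            return g + XS.T @ (hS * (XS @ v)) / m_sub
        # Inexact inner solve with prox-SVRG on the subsampled quadratic.
        v = np.zeros(d)
        for _ in range(3):                   # a few epochs suffice for an inexact step
            mu = sub_grad_full(v)            # full subproblem gradient at the reference point
            v_ref = v.copy()
            for _ in range(n_inner):
                i = rng.integers(m_sub)
                gi = sub_grad_i(v, i) - sub_grad_i(v_ref, i) + mu
                # prox step on lam*||w + v||_1, expressed in the variable v
                v = soft_threshold(w + v - eta * gi, eta * lam) - w
        w = w + v                            # unit step; a line search could be added
    return w
```

A call such as prox_newton_l1(X, y, lam=1e-3) on a 0/1-labeled dataset would return an approximate minimizer; in practice the inner accuracy and subsample size would follow the schedule analyzed in the paper rather than the fixed heuristics used above.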