SAPPHIRE: Preconditioned Stochastic Variance Reduction for Faster Large-Scale Statistical Learning

Abstract

Regularized empirical risk minimization (rERM) has become important in data-intensive fields such as genomics and advertising, with stochastic gradient methods typically used to solve the largest problems. However, ill-conditioned objectives and non-smooth regularizers undermine the performance of traditional stochastic gradient methods, leading to slow convergence and significant computational costs. To address these challenges, we propose the SAPPHIRE (Sketching-based Approximations for Proximal Preconditioning and Hessian Inexactness with Variance-REduced Gradients) algorithm, which integrates sketch-based preconditioning to tackle ill-conditioning and uses a scaled proximal mapping to minimize the non-smooth regularizer. This stochastic variance-reduced algorithm achieves condition-number-free linear convergence to the optimum, delivering an efficient and scalable solution for ill-conditioned composite large-scale convex machine learning problems. Extensive experiments on lasso and logistic regression demonstrate that SAPPHIRE often converges 20 times faster than other common choices such as Catalyst, SAGA, and SVRG. This advantage persists even when the objective is non-convex or the preconditioner is infrequently updated, highlighting its robust and practical effectiveness.
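
Below is a minimal, hypothetical Python sketch of the kind of update the abstract describes for the lasso: a variance-reduced (SVRG-style) stochastic gradient combined with a sketch-based preconditioner and a scaled proximal step. The function names, the Gaussian sketch, and the diagonal preconditioner are illustrative assumptions, not the paper's exact algorithm.

```python
# Simplified, assumption-laden illustration of a preconditioned prox-SVRG epoch for
# the lasso objective (1/2n)||Ax - b||^2 + lam*||x||_1. Not the authors' exact method.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sketched_diag_preconditioner(A, sketch_size, rng):
    """Diagonal preconditioner estimated from a Gaussian sketch S @ A of the data."""
    n = A.shape[0]
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    SA = S @ A                                  # sketched data matrix
    return np.sum(SA * SA, axis=0) / n + 1e-8   # approx diag(A^T A) / n

def sapphire_like_epoch(A, b, x, lam, step, sketch_size, rng):
    """One variance-reduced pass with a scaled (diagonally preconditioned) prox step."""
    n = A.shape[0]
    d = sketched_diag_preconditioner(A, sketch_size, rng)
    x_snap = x.copy()
    full_grad = A.T @ (A @ x_snap - b) / n       # gradient snapshot at x_snap
    for i in rng.permutation(n):
        a_i = A[i]
        g_i = a_i * (a_i @ x - b[i])             # stochastic gradient at current x
        g_snap = a_i * (a_i @ x_snap - b[i])     # same sample evaluated at the snapshot
        v = g_i - g_snap + full_grad             # variance-reduced gradient estimate
        z = x - step * v / d                     # preconditioned gradient step
        x = soft_threshold(z, step * lam / d)    # scaled proximal mapping (diagonal metric)
    return x
```

For instance, repeatedly calling `sapphire_like_epoch(A, b, x, lam=0.1, step=1.0, sketch_size=256, rng=np.random.default_rng(0))` runs one variance-reduced pass over the data; the full-matrix preconditioner, its inexact (infrequent) updates, and the logistic-regression case discussed in the abstract are omitted here for brevity.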
