
Sharpened Lazy Incremental Quasi-Newton Method

Abstract

The problem of minimizing the sum of $n$ functions in $d$ dimensions is ubiquitous in machine learning and statistics. In many applications where the number of observations $n$ is large, it is necessary to use incremental or stochastic methods, as their per-iteration cost is independent of $n$. Of these, Quasi-Newton (QN) methods strike a balance between the per-iteration cost and the convergence rate. Specifically, they exhibit a superlinear rate with $O(d^2)$ cost, in contrast to the linear rate of first-order methods with $O(d)$ cost and the quadratic rate of second-order methods with $O(d^3)$ cost. However, existing incremental methods have notable shortcomings: Incremental Quasi-Newton (IQN) only exhibits asymptotic superlinear convergence. In contrast, Incremental Greedy BFGS (IGS) offers explicit superlinear convergence but suffers from poor empirical performance and has a per-iteration cost of $O(d^3)$. To address these issues, we introduce the Sharpened Lazy Incremental Quasi-Newton Method (SLIQN) that achieves the best of both worlds: an explicit superlinear convergence rate and superior empirical performance at a per-iteration $O(d^2)$ cost. SLIQN features two key changes: first, it incorporates a hybrid strategy of using both classic and greedy BFGS updates, allowing it to empirically outperform both IQN and IGS. Second, it employs a clever constant multiplicative factor along with a lazy propagation strategy, which enables it to have a cost of $O(d^2)$. Additionally, our experiments demonstrate the superiority of SLIQN over other incremental and stochastic Quasi-Newton variants and establish its competitiveness with second-order incremental methods.
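For readers unfamiliar with the two building blocks referenced in the abstract, below is a minimal NumPy sketch of the standard classic and greedy BFGS Hessian-approximation updates that a hybrid strategy of this kind alternates between. The function names are illustrative, and the greedy coordinate selection rule shown follows the usual Rodomanov-Nesterov greedy construction; this is not code from the paper.

```python
import numpy as np

def classic_bfgs_update(B, s, y):
    """Classic BFGS update of the Hessian approximation B from a
    curvature pair: s = x_new - x_old, y = grad_new - grad_old."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def greedy_bfgs_update(B, A):
    """Greedy BFGS update (Rodomanov-Nesterov style): pick the
    coordinate direction u = e_i maximizing (e_i^T B e_i)/(e_i^T A e_i),
    then update B using the exact curvature A u in place of a
    gradient difference, where A is the true Hessian."""
    i = np.argmax(np.diag(B) / np.diag(A))
    Bu, Au = B[:, i], A[:, i]          # B e_i and A e_i
    return B - np.outer(Bu, Bu) / Bu[i] + np.outer(Au, Au) / Au[i]
```

In the incremental setting described above, one such update would be applied per iteration to the Hessian approximation of the selected component function; the constant multiplicative scaling and lazy propagation that give SLIQN its $O(d^2)$ per-iteration cost are specific to the paper and are not reproduced in this sketch.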
