
Distributed Adaptive Greedy Quasi-Newton Methods with Explicit Non-asymptotic Convergence Bounds

Abstract

Although quasi-Newton methods have been extensively studied in the literature, they either guarantee only local convergence or rely on a series of line searches for global convergence, which is not acceptable in the distributed setting. In this work, we first propose a line-search-free greedy quasi-Newton (GQN) method with adaptive steps and establish explicit non-asymptotic bounds for both the global convergence rate and the local superlinear rate. Our novel idea lies in the design of multiple greedy quasi-Newton updates, which involve computing Hessian-vector products, to control the Hessian approximation error, together with a simple mechanism that adjusts the stepsize to ensure that the objective function improves at every iteration. We then extend it to the master-worker framework and propose a distributed adaptive GQN method whose communication cost is comparable to that of first-order methods, yet which retains the superb convergence properties of its centralized counterpart. Finally, we demonstrate the advantages of our methods via numerical experiments.
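For intuition, here is a minimal numpy sketch of a single greedy BFGS-type update in which the true Hessian is accessed only through a Hessian-vector-product oracle, in the spirit of the updates described above. This is not the paper's exact algorithm: the greedy selection rule over coordinate directions follows the classical greedy quasi-Newton literature, and the names `greedy_bfgs_update` and `hvp` are illustrative assumptions.

```python
import numpy as np

def greedy_bfgs_update(G, hvp, n):
    """One greedy BFGS-type update of the Hessian approximation G.

    hvp(v) returns the Hessian-vector product A @ v for the true Hessian A
    at the current iterate; only this oracle is assumed available.
    """
    # Greedy direction: the basis vector e_i maximizing (e_i^T G e_i) / (e_i^T A e_i).
    # Computing diag(A) via n oracle calls is for illustration only; in practice
    # the diagonal would be maintained or estimated more cheaply.
    diag_A = np.array([hvp(np.eye(n)[:, i])[i] for i in range(n)])
    i = int(np.argmax(np.diag(G) / diag_A))
    u = np.eye(n)[:, i]
    Au = hvp(u)          # one Hessian-vector product along the greedy direction
    Gu = G @ u
    # Classical BFGS update along u: replace old curvature with the true curvature.
    return G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(Au, Au) / (u @ Au)

# Hypothetical usage on a quadratic f(x) = 0.5 x^T A x:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
G = 4.0 * np.eye(2)              # initial approximation with G >= A
for _ in range(3):               # repeated greedy updates shrink G - A
    G = greedy_bfgs_update(G, lambda v: A @ v, 2)
```

Applying several such updates per iteration, as the abstract suggests, drives the Hessian approximation error down between steps, which is what enables the explicit superlinear rate without line searches.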
