
Provably Faster Gradient Descent via Long Steps

SIAM Journal on Optimization (SIOPT), 2023
Main: 16 pages · Appendix: 1 page · Bibliography: 3 pages · 2 figures · 2 tables
Abstract

This work establishes new convergence guarantees for gradient descent in smooth convex optimization via a computer-assisted analysis technique. Our theory allows nonconstant stepsize policies with frequent long steps potentially violating descent, by analyzing the overall effect of many iterations at once rather than the typical one-iteration inductions used in most first-order method analyses. We show that long steps, which may increase the objective value in the short term, lead to provably faster convergence in the long term. A conjecture towards proving a faster $O(1/T\log T)$ rate for gradient descent is also motivated along with simple numerical validation.
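As a rough illustration of the idea, here is a minimal Python sketch of gradient descent with a periodic nonconstant stepsize pattern containing an occasional long step (a normalized stepsize larger than the classical descent threshold of 2). The pattern values below are hypothetical placeholders for illustration only, not the certified patterns from the paper.

```python
import numpy as np

def gradient_descent_long_steps(grad, x0, L, pattern, num_cycles):
    """Gradient descent with a periodic nonconstant stepsize pattern.

    grad: gradient oracle of an L-smooth convex function
    L: smoothness constant; actual steps taken are h / L for each h in pattern
    pattern: normalized stepsizes; values h > 2 are "long steps" that
             may temporarily increase the objective
    """
    x = np.array(x0, dtype=float)
    for _ in range(num_cycles):
        for h in pattern:
            x = x - (h / L) * grad(x)
    return x

# Example on a smooth convex quadratic f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 0.1])
L = 1.0                      # largest eigenvalue of A, so f is L-smooth
grad = lambda x: A @ x
pattern = [1.4, 1.4, 3.9]    # hypothetical pattern, not the paper's values
x = gradient_descent_long_steps(grad, [1.0, 1.0], L, pattern, num_cycles=30)
print(0.5 * x @ A @ x)       # objective value after the run
```

The point of the sketch is only the control flow: convergence is argued over a full cycle of the pattern rather than per step, so individual long steps need not decrease $f$.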
