Anytime Acceleration of Gradient Descent
Annual Conference on Computational Learning Theory (COLT), 2024
Main: 13 pages
2 figures
Bibliography: 2 pages
Appendix: 4 pages
Abstract
This work investigates stepsize-based acceleration of gradient descent with {\em anytime} convergence guarantees. For smooth (non-strongly) convex optimization, we propose a stepsize schedule that allows gradient descent to achieve convergence guarantees of $\mathcal{O}(T^{-1.03})$ for any stopping time $T$, where the stepsize schedule is predetermined without prior knowledge of the stopping time. This result provides an affirmative answer to a COLT open problem \citep{kornowski2024open} regarding whether stepsize-based acceleration can yield anytime convergence rates of $o(T^{-1})$. We further extend our theory to yield anytime convergence guarantees of $\exp(-\Omega(T/\kappa^{0.97}))$ for smooth and strongly convex optimization, with $\kappa$ being the condition number.
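The abstract does not spell out the schedule itself, but the protocol it describes is easy to illustrate: the stepsizes are fixed in advance, gradient descent is run with them, and the guarantee must hold at every iterate rather than only at a preselected horizon. Below is a minimal Python sketch of this anytime setup, assuming a quadratic test objective; the `schedule` function here is a hypothetical placeholder (a constant $1/L$ stepsize, which only yields the unaccelerated $\mathcal{O}(1/T)$ rate), standing in for the paper's accelerated schedule, which is not given in the abstract.

```python
import numpy as np

def gradient_descent_anytime(grad, x0, schedule, T):
    """Gradient descent with a predetermined stepsize schedule.

    The schedule is fixed before the run and never looks at T:
    stopping at any time t <= T must come with its own guarantee.
    """
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for t in range(T):
        x = x - schedule(t) * grad(x)
        iterates.append(x.copy())
    return iterates

# Example objective: f(x) = 0.5 * x^T A x, smooth with L = lambda_max(A).
A = np.diag([0.5, 4.0])
L = 4.0
f = lambda x: 0.5 * x @ A @ x      # minimum value is 0 at x = 0
grad = lambda x: A @ x

# Hypothetical placeholder schedule: constant 1/L, anytime but unaccelerated.
# The paper's (unspecified here) schedule is what improves this to an
# o(1/T) guarantee at every stopping time.
schedule = lambda t: 1.0 / L

xs = gradient_descent_anytime(grad, np.array([1.0, 1.0]), schedule, T=50)
for t in (1, 10, 50):              # "stop" at several different times
    print(t, f(xs[t]))             # suboptimality f(x_t) - f* at each stop
```

The tension the open problem highlights is visible in this setup: the constant schedule above is anytime but unaccelerated, while known accelerated stepsize schedules certify their fast rate only at a preselected final time; the paper's schedule is designed to deliver both properties at once.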
