Faster Acceleration for Steepest Descent
Annual Conference on Computational Learning Theory (COLT), 2024
Main: 12 pages
Figures: 1
Bibliography: 4 pages
Appendix: 13 pages
Abstract
We propose a new accelerated first-order method for convex optimization under non-Euclidean smoothness assumptions. In contrast to standard acceleration techniques, our approach uses primal-dual iterate sequences taken with respect to differing norms, which are then coupled using an implicitly determined interpolation parameter. For ℓ_p norm smooth problems in d dimensions, our method provides an iteration complexity improvement of up to O(d^(1−2/p)) in terms of calls to a first-order oracle, thereby allowing us to circumvent long-standing barriers in accelerated non-Euclidean steepest descent.
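To make the setup concrete, here is a minimal NumPy sketch of accelerated steepest descent under ℓ_p norm smoothness, in the spirit of the abstract. It is not the paper's method: the coupling weight tau is computed explicitly from a standard Nesterov-style schedule rather than determined implicitly, and the dual sequence takes a plain Euclidean gradient step in place of a dual-norm mirror step. The function names (`steepest_direction_lp`, `accelerated_lp_descent`) and the step-size schedule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lp_dual_exponent(p: float) -> float:
    """Return q with 1/p + 1/q = 1 (dual exponent of the l_p norm); assumes p > 1."""
    return p / (p - 1.0)

def steepest_direction_lp(g: np.ndarray, p: float) -> np.ndarray:
    """Unit-l_p-norm direction u maximizing <g, u>.

    For the l_p norm the maximizer has the closed form
    u_i = sign(g_i) * |g_i|^(q-1) / ||g||_q^(q-1), where q is the dual exponent.
    """
    q = lp_dual_exponent(p)
    gq = np.linalg.norm(g, ord=q)
    if gq == 0.0:
        return np.zeros_like(g)
    return np.sign(g) * np.abs(g) ** (q - 1) / gq ** (q - 1)

def accelerated_lp_descent(grad, x0, L, p, T):
    """Nesterov-style acceleration wrapped around l_p steepest-descent steps.

    grad : callable returning the gradient of f
    L    : smoothness constant of f with respect to the l_p norm
    Uses an explicit linear-coupling schedule; the paper instead determines
    the interpolation parameter implicitly and runs the primal and dual
    sequences with respect to different norms.
    """
    q = lp_dual_exponent(p)
    x = z = x0.copy()
    A = 0.0
    for k in range(T):
        a = (k + 1) / (2.0 * L)          # standard linear-coupling weight schedule
        A_next = A + a
        tau = a / A_next                 # explicit coupling weight (assumption)
        y = (1 - tau) * x + tau * z      # couple the primal and dual iterates
        g = grad(y)
        # primal step: steepest descent with respect to the l_p norm,
        # scaled by the dual-norm gradient magnitude
        gq = np.linalg.norm(g, ord=q)
        x = y - (gq / L) * steepest_direction_lp(g, p)
        # dual step: a plain Euclidean gradient step standing in for the
        # paper's dual-norm mirror step
        z = z - a * g
        A = A_next
    return x
```

As a sanity check on the direction formula, setting p = 2 makes the steepest-descent direction the normalized gradient, and the sketch reduces to ordinary accelerated gradient descent.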
