On the Dynamic Regret of Following the Regularized Leader: Optimism with History Pruning

28 May 2025
Naram Mhaisen
George Iosifidis
Main: 10 pages · 2 figures · Bibliography: 3 pages · Appendix: 13 pages
Abstract

We revisit the Follow the Regularized Leader (FTRL) framework for Online Convex Optimization (OCO) over compact sets, focusing on achieving dynamic regret guarantees. Prior work has highlighted the framework's limitations in dynamic environments due to its tendency to produce "lazy" iterates. However, building on insights showing FTRL's ability to produce "agile" iterates, we show that it can indeed recover known dynamic regret bounds through optimistic composition of future costs and careful linearization of past costs, which can lead to pruning some of them. This new analysis of FTRL against dynamic comparators yields a principled way to interpolate between greedy and agile updates and offers several benefits, including refined control over regret terms, optimism without cyclic dependence, and the application of minimal recursive regularization akin to AdaFTRL. More broadly, we show that it is not the lazy projection style of FTRL that hinders (optimistic) dynamic regret, but the decoupling of the algorithm's state (linearized history) from its iterates, allowing the state to grow arbitrarily. Instead, pruning synchronizes these two when necessary.
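The abstract contrasts FTRL's "lazy" iterates with the algorithm's state (the linearized history of past costs) that drives them. As a minimal, hypothetical sketch — not the paper's algorithm — the following implements plain linearized FTRL with a Euclidean regularizer over a ball-shaped compact set; the history-pruning step the paper proposes would, roughly, trim the accumulated gradient state to keep it synchronized with the iterates, and is only marked by a comment here.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball of the given radius (the compact feasible set).
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def ftrl_linearized(gradients, eta=0.5, radius=1.0):
    """Follow the Regularized Leader with linearized past costs.

    Iterate: x_{t+1} = argmin_{x in X}  <sum_{s<=t} g_s, x> + ||x||^2 / (2*eta).
    With a quadratic regularizer and a Euclidean ball X, this argmin is the
    projection of -eta * (cumulative gradient) onto the ball (lazy projection).
    """
    d = len(gradients[0])
    g_sum = np.zeros(d)                  # algorithm state: linearized history
    iterates = [np.zeros(d)]             # x_1 = argmin of the regularizer alone
    for g in gradients:
        g_sum += g                       # accumulate linearized costs
        # (The paper's pruning would trim g_sum here when the state drifts
        #  too far from the iterates; omitted in this sketch.)
        iterates.append(project_ball(-eta * g_sum, radius))
    return iterates
```

Note that `g_sum` can grow arbitrarily while every iterate stays inside the ball — the decoupling of state from iterates that the abstract identifies as the obstacle to dynamic regret.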

@article{mhaisen2025_2505.22899,
  title={On the Dynamic Regret of Following the Regularized Leader: Optimism with History Pruning},
  author={Naram Mhaisen and George Iosifidis},
  journal={arXiv preprint arXiv:2505.22899},
  year={2025}
}