
Exploiting Curvature in Online Convex Optimization with Delayed Feedback

Main: 8 pages
Figures: 3
Tables: 1
Bibliography: 3 pages
Appendix: 21 pages
Abstract

In this work, we study the online convex optimization problem with curved losses and delayed feedback. When losses are strongly convex, existing approaches obtain regret bounds of order $d_{\max} \ln T$, where $d_{\max}$ is the maximum delay and $T$ is the time horizon. However, in many cases, this guarantee can be much worse than $\sqrt{d_{\mathrm{tot}}}$ as obtained by a delayed version of online gradient descent, where $d_{\mathrm{tot}}$ is the total delay. We bridge this gap by proposing a variant of follow-the-regularized-leader that obtains regret of order $\min\{\sigma_{\max}\ln T, \sqrt{d_{\mathrm{tot}}}\}$, where $\sigma_{\max}$ is the maximum number of missing observations. We then consider exp-concave losses and extend the Online Newton Step algorithm to handle delays with adaptive learning-rate tuning, achieving regret $\min\{d_{\max} n\ln T, \sqrt{d_{\mathrm{tot}}}\}$, where $n$ is the dimension. To our knowledge, this is the first algorithm to achieve such a regret bound for exp-concave losses. We further consider the problem of unconstrained online linear regression and achieve a similar guarantee by designing a variant of the Vovk-Azoury-Warmuth forecaster with a clipping trick. Finally, we implement our algorithms and conduct experiments under various types of delay and losses, showing improved performance over existing methods.
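The abstract compares the curved-loss bounds against the $\sqrt{d_{\mathrm{tot}}}$ guarantee of delayed online gradient descent. As a point of reference, below is a minimal sketch of that baseline (not the paper's FTRL, ONS, or Vovk-Azoury-Warmuth variants); the function names, step-size schedule, and ball-shaped feasible set are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def delayed_ogd(grad_fn, delays, T, n, radius=1.0, eta0=1.0):
    """Online gradient descent when round-t feedback arrives delays[t] rounds late.

    grad_fn: callable (t, x) -> gradient of the round-t loss at point x
    delays:  delays[t] = d_t, the delay of round-t feedback
    """
    x = np.zeros(n)
    played = []     # points played so far, indexed by round
    arrivals = {}   # arrival round -> rounds whose feedback arrives then
    for t in range(T):
        played.append(x.copy())
        arrivals.setdefault(t + delays[t], []).append(t)
        eta = eta0 / np.sqrt(t + 1)  # illustrative step-size schedule
        for s in arrivals.pop(t, []):
            # apply the late gradient of round s, evaluated at the point played then
            x = x - eta * grad_fn(s, played[s])
        norm = np.linalg.norm(x)
        if norm > radius:            # project back onto the feasible ball
            x *= radius / norm
    return played

# Example: quadratic losses f_t(x) = 0.5 * ||x - theta_t||^2 with fixed delay 3
rng = np.random.default_rng(0)
thetas = rng.normal(size=(100, 5))
points = delayed_ogd(lambda t, x: x - thetas[t], [3] * 100, T=100, n=5)
```

Each gradient is applied only at the round its feedback arrives, which is what makes the regret scale with the total delay $d_{\mathrm{tot}}$ rather than with $T$ alone.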

@article{qiu2025_2506.07595,
  title={Exploiting Curvature in Online Convex Optimization with Delayed Feedback},
  author={Hao Qiu and Emmanuel Esposito and Mengxiao Zhang},
  journal={arXiv preprint arXiv:2506.07595},
  year={2025}
}