Exploiting Curvature in Online Convex Optimization with Delayed Feedback

In this work, we study the online convex optimization problem with curved losses and delayed feedback. When losses are strongly convex, existing approaches obtain regret bounds of order $d_{\max} \log T$, where $d_{\max}$ is the maximum delay and $T$ is the time horizon. However, in many cases, this guarantee can be much worse than the $\sqrt{D}$ bound obtained by a delayed version of online gradient descent, where $D$ is the total delay. We bridge this gap by proposing a variant of follow-the-regularized-leader that obtains regret of order $\sigma_{\max} \log T$, where $\sigma_{\max}$ is the maximum number of missing observations at any round. We then consider exp-concave losses and extend the Online Newton Step algorithm to handle delays via adaptive tuning of the learning rate, achieving regret of order $d \, \sigma_{\max} \log T$, where $d$ is the dimension. To our knowledge, this is the first algorithm to achieve such a regret bound for exp-concave losses. We further consider the problem of unconstrained online linear regression and achieve a similar guarantee by designing a variant of the Vovk-Azoury-Warmuth forecaster with a clipping trick. Finally, we implement our algorithms and conduct experiments under various types of delay and losses, showing improved performance over existing methods.
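As background for the baseline discussed above, the following is a minimal sketch of delayed online gradient descent: the gradient incurred at round $t$ is applied only once its feedback arrives, after the round's delay has elapsed. The function name `delayed_ogd`, the fixed-delay setup, and the quadratic test loss are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def delayed_ogd(grad_fn, T, dim, delays, eta=0.1, radius=1.0):
    """Online gradient descent with delayed feedback (illustrative sketch).

    grad_fn(t, x) returns the gradient of the round-t loss at the point x
    that was played in round t; this feedback only becomes available at
    round t + delays[t], at which point it is applied to the iterate.
    """
    x = np.zeros(dim)
    iterates = []
    pending = {}  # arrival round -> list of gradients waiting to be applied
    for t in range(T):
        iterates.append(x.copy())
        g = grad_fn(t, x)
        # schedule this round's feedback to arrive delays[t] rounds later
        pending.setdefault(t + delays[t], []).append(g)
        # apply all feedback whose delay expires at the current round
        for g_arrived in pending.pop(t, []):
            x = x - eta * g_arrived
        # project back onto the Euclidean ball of the given radius
        norm = np.linalg.norm(x)
        if norm > radius:
            x = x * (radius / norm)
    return np.array(iterates)
```

With a small enough step size, the iterates still track the comparator despite the stale gradients; for example, on the quadratic loss $f_t(x) = \lVert x - c \rVert^2$ with a constant delay of 2, the final iterate ends up close to $c$.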
@article{qiu2025_2506.07595,
  title   = {Exploiting Curvature in Online Convex Optimization with Delayed Feedback},
  author  = {Hao Qiu and Emmanuel Esposito and Mengxiao Zhang},
  journal = {arXiv preprint arXiv:2506.07595},
  year    = {2025}
}