Reflecting the greater significance of recent history over the distant past in non-stationary environments, λ-discounted regret has been introduced in online convex optimization (OCO) to gracefully forget past data as new information arrives. When the discount factor λ is given, online gradient descent with an appropriate step size achieves an O(√(1/(1−λ))) discounted regret. However, the value of λ is often not predetermined in real-world scenarios. This gives rise to a significant open question: is it possible to develop a discounted algorithm that adapts to an unknown discount factor? In this paper, we affirmatively answer this question by providing a novel analysis demonstrating that smoothed OGD (SOGD) achieves a uniform discounted regret bound, holding for all values of λ across a continuous interval simultaneously. The basic idea is to maintain multiple OGD instances to handle different discount factors, and to aggregate their outputs sequentially with an online prediction algorithm called Discounted-Normal-Predictor (DNP) (Kapralov and Panigrahy, 2010). Our analysis reveals that DNP can combine the decisions of two experts, even when they operate on discounted regret with different discount factors.
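The known-λ baseline described above can be sketched in a few lines. This is an illustrative toy, not the paper's SOGD algorithm: it runs plain projected OGD on 1-D quadratic losses f_t(x) = (x − z_t)² and evaluates the λ-discounted regret against the best fixed point in hindsight. The step size η = √(1 − λ), the domain [−1, 1], and the loss family are all assumptions made for the sketch.

```python
import numpy as np

# Hedged sketch (not the paper's method): projected online gradient descent
# on 1-D quadratic losses f_t(x) = (x - z_t)^2, measured by discounted regret.
# Step size eta = sqrt(1 - lam) and domain [-1, 1] are illustrative choices.

def ogd(targets, eta, lo=-1.0, hi=1.0):
    """Run projected OGD over [lo, hi]; return the sequence of iterates."""
    x, xs = 0.0, []
    for z in targets:
        xs.append(x)                       # play x_t before the loss arrives
        grad = 2.0 * (x - z)               # gradient of (x - z)^2 at x
        x = min(hi, max(lo, x - eta * grad))
    return xs

def discounted_regret(xs, targets, lam, u):
    """Discounted regret: sum_t lam^(T-t) * (f_t(x_t) - f_t(u))."""
    T = len(targets)
    return sum(lam ** (T - 1 - t) * ((x - z) ** 2 - (u - z) ** 2)
               for t, (x, z) in enumerate(zip(xs, targets)))

rng = np.random.default_rng(0)
targets = rng.uniform(-1.0, 1.0, size=500)
lam = 0.99                                 # discount factor, assumed known here
xs = ogd(targets, eta=float(np.sqrt(1.0 - lam)))

# The best fixed comparator for discounted quadratic losses is the
# discounted mean of the targets.
weights = lam ** np.arange(len(targets) - 1, -1, -1)
u = float(np.average(targets, weights=weights))
reg = discounted_regret(xs, targets, lam, u)
print(f"discounted regret vs. best fixed point: {reg:.3f}")
```

When λ is unknown, the paper's approach instead runs several such OGD instances for different discount factors and merges their predictions with DNP; the sketch above covers only the known-λ baseline.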
@article{yang2025_2505.19491,
  title   = {Discounted Online Convex Optimization: Uniform Regret Across a Continuous Interval},
  author  = {Wenhao Yang and Sifan Yang and Lijun Zhang},
  journal = {arXiv preprint arXiv:2505.19491},
  year    = {2025}
}