Second Order Path Variationals in Non-Stationary Online Learning

Abstract

We consider the problem of universal dynamic regret minimization under exp-concave and smooth losses. We show that appropriately designed Strongly Adaptive algorithms achieve a dynamic regret of $\tilde O(d^2 n^{1/5} C_n^{2/5} \vee d^2)$, where $n$ is the time horizon and $C_n$ is a path variational based on second order differences of the comparator sequence. Such a path variational naturally encodes comparator sequences that are piecewise linear -- a powerful family that tracks a variety of non-stationarity patterns in practice (Kim et al., 2009). The aforementioned dynamic regret rate is shown to be optimal modulo dimension dependencies and poly-logarithmic factors of $n$. Our proof techniques rely on analysing the KKT conditions of the offline oracle and require several non-trivial generalizations of the ideas in Baby and Wang (2021), where the latter work only leads to a slower dynamic regret rate of $\tilde O(d^{2.5} n^{1/3} C_n^{2/3} \vee d^{2.5})$ for the current problem.
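For concreteness, a second order path variational over comparators $w_1, \dots, w_n \in \mathbb{R}^d$ can be instantiated as the accumulated second differences of the sequence; this is an illustrative form, and the paper's exact choice of norm and normalization may differ:

$$
C_n \;=\; \sum_{t=3}^{n} \left\| w_t - 2\,w_{t-1} + w_{t-2} \right\|_1 .
$$

Under this form, $C_n = 0$ precisely when the comparators lie on a single line in $\mathbb{R}^d$, and a piecewise linear comparator sequence with $k$ kinks contributes at most $k$ non-zero terms -- which is why such sequences are naturally encoded by this variational.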
