A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal, and Parameter-free

We propose the first contextual bandit algorithm that is parameter-free, efficient, and optimal in terms of dynamic regret. Specifically, our algorithm achieves $\widetilde{\mathcal{O}}(\min\{\sqrt{ST}, \Delta^{1/3}T^{2/3}\})$ dynamic regret for a contextual bandit problem with $T$ rounds, $S$ switches, and $\Delta$ total variation in data distributions. Importantly, our algorithm is adaptive and does not need to know $S$ or $\Delta$ ahead of time, and it can be implemented efficiently assuming access to an ERM oracle. Our results strictly improve the $\widetilde{\mathcal{O}}(\min\{S^{1/4}T^{3/4}, \Delta^{1/5}T^{4/5}\})$ bound of (Luo et al., 2018), and greatly generalize and improve the result of (Auer et al., 2018), which holds only for the two-armed bandit problem without contextual information. The key novelty of our algorithm is the introduction of replay phases, during which the algorithm acts according to its previous decisions for a certain amount of time, in order to detect non-stationarity while maintaining a good balance between exploration and exploitation.
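To give intuition for the replay idea, the following is a minimal, self-contained sketch of replay-based change detection in a toy two-armed (non-contextual) bandit. It is an illustration of the general principle only, not the paper's algorithm: the function name `run_bandit_with_replay`, the epsilon-greedy base policy, the fixed replay schedule (`gap`, `replay_len`), and the detection threshold `thresh` are all assumptions made for this example.

```python
import random

def run_bandit_with_replay(rewards_fn, T, replay_len=50, gap=200, thresh=0.3, seed=0):
    """Toy two-armed bandit with periodic replay phases.

    Every `gap` rounds the learner replays the historically best arm for
    `replay_len` rounds and compares the fresh sample mean to the old
    estimate; a large discrepancy signals non-stationarity and triggers
    a restart. Illustrative sketch only, not the paper's method.
    """
    rng = random.Random(seed)
    counts, sums = [0, 0], [0.0, 0.0]
    detections = []          # rounds at which a distribution change was flagged
    t = 0
    while t < T:
        # Replay phase: re-test the arm that looked best so far.
        if t % gap == 0 and counts[0] + counts[1] > 0:
            best = 0 if sums[0] / max(counts[0], 1) >= sums[1] / max(counts[1], 1) else 1
            old_mean = sums[best] / max(counts[best], 1)
            replay_sum = 0.0
            for _ in range(replay_len):
                replay_sum += rewards_fn(best, t, rng)
                t += 1
            if abs(replay_sum / replay_len - old_mean) > thresh:
                detections.append(t)            # change detected: restart estimates
                counts, sums = [0, 0], [0.0, 0.0]
            continue
        # Otherwise: simple epsilon-greedy exploitation.
        if rng.random() < 0.1 or counts[0] == 0 or counts[1] == 0:
            arm = rng.randrange(2)
        else:
            arm = 0 if sums[0] / counts[0] >= sums[1] / counts[1] else 1
        r = rewards_fn(arm, t, rng)
        counts[arm] += 1
        sums[arm] += r
        t += 1
    return detections

# Hypothetical environment: arm 0 is good before round 500, bad afterwards.
def shifting(arm, t, rng):
    p = (0.9 if arm == 0 else 0.2) if t < 500 else (0.1 if arm == 0 else 0.2)
    return 1.0 if rng.random() < p else 0.0
```

Running `run_bandit_with_replay(shifting, T=1500)` on this environment typically reports a detection shortly after the shift at round 500; a pure greedy learner that had drifted away from arm 0 would have no comparable mechanism for noticing the change.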