Non-Stationary Latent Auto-Regressive Bandits
We consider the stochastic multi-armed bandit problem with non-stationary rewards. We present a novel formulation of non-stationarity in which changes in the arms' mean rewards over time are driven by an unknown, latent, auto-regressive (AR) state of order k. We call this new environment the latent AR bandit. Variants of the latent AR bandit appear in many real-world settings, especially in emerging scientific fields such as behavioral health and education, where mechanistic models of the environment are scarce. When the AR order k is known, we propose an algorithm with a provable regret guarantee in this setting. Empirically, our algorithm outperforms standard UCB across multiple non-stationary environments, even when k is mis-specified.
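To make the environment concrete, the following is a minimal simulation sketch of a latent AR bandit. It assumes a hypothetical additive reward model (each arm's mean at time t is its baseline plus the shared latent state z_t); the paper's exact reward model, coefficients, and noise assumptions may differ.

```python
import numpy as np

def simulate_latent_ar_bandit(base_means, gammas, horizon, noise_sd=0.1, seed=0):
    """Simulate a bandit whose mean rewards drift with a latent AR(k) state.

    base_means : per-arm baseline mean rewards (assumed additive model).
    gammas     : k AR coefficients defining the latent recursion
                 z_t = gammas[0]*z_{t-1} + ... + gammas[k-1]*z_{t-k} + eps_t.
    Returns an array of shape (horizon, num_arms) of realized rewards.
    """
    rng = np.random.default_rng(seed)
    k = len(gammas)
    z = [0.0] * k  # latent state history, initialized at zero
    base = np.asarray(base_means, dtype=float)
    rewards = np.zeros((horizon, len(base)))
    for t in range(horizon):
        # Latent AR(k) update: weighted sum of the k most recent states plus noise.
        z_t = sum(g * z[-(i + 1)] for i, g in enumerate(gammas))
        z_t += rng.normal(0.0, noise_sd)
        z.append(z_t)
        # Assumed reward model: arm a's mean at time t is base[a] + z_t,
        # so all arms' means shift together with the latent state.
        rewards[t] = base + z_t + rng.normal(0.0, noise_sd, len(base))
    return rewards
```

Because every arm's mean is shifted by the same latent state, an algorithm that tracks z_t can adapt to the non-stationarity, whereas standard UCB treats the drift as noise.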