
Randomized Exploration for Non-Stationary Stochastic Linear Bandits

Abstract

We investigate two perturbation approaches to overcome the conservatism that optimism-based algorithms chronically suffer from in practice. The first approach replaces optimism with a simple randomization when using confidence sets. The second adds random perturbations to the current estimate before maximizing the expected reward. For non-stationary linear bandits, where each action is associated with a $d$-dimensional feature and the unknown parameter is time-varying with total variation $B_T$, we propose two randomized algorithms, Discounted Randomized LinUCB (D-RandLinUCB) and Discounted Linear Thompson Sampling (D-LinTS), via the two perturbation approaches. We highlight the statistical optimality versus computational efficiency trade-off between them: the former asymptotically achieves the optimal dynamic regret $\tilde{O}(d^{7/8} B_T^{1/4} T^{3/4})$, whereas the latter is oracle-efficient at the cost of an extra logarithmic factor in the number of arms relative to the minimax-optimal dynamic regret. In a simulation study, both algorithms show outstanding performance in tackling the conservatism issue that Discounted LinUCB struggles with.
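To make the contrast between the two perturbation approaches concrete, below is a minimal sketch (not the authors' code) of how each one might select an arm in a linear bandit, assuming a discounted least-squares estimate theta_hat, a discounted Gram matrix V, and an arm feature matrix X; all names, the Gaussian perturbation choice, and the scale constant beta are illustrative assumptions.

# Minimal sketch contrasting the two perturbation approaches (assumed notation):
#   theta_hat : discounted least-squares estimate of the unknown parameter
#   V         : discounted Gram (design) matrix
#   X         : (num_arms, d) feature matrix of the available actions
import numpy as np

def select_arm_randomized_ucb(X, theta_hat, V, beta, rng):
    """First approach: replace optimism with randomization by drawing a
    random multiplier Z for the confidence width instead of its upper bound."""
    V_inv = np.linalg.inv(V)
    widths = np.sqrt(np.einsum("ad,dk,ak->a", X, V_inv, X))  # ||x_a||_{V^{-1}}
    Z = rng.normal()  # random confidence-width scale (assumed Gaussian)
    scores = X @ theta_hat + beta * Z * widths
    return int(np.argmax(scores))

def select_arm_perturbed_estimate(X, theta_hat, V, beta, rng):
    """Second approach: perturb the current estimate itself, then maximize the
    (perturbed) expected reward -- only an argmax oracle over arms is needed."""
    V_inv = np.linalg.inv(V)
    theta_tilde = rng.multivariate_normal(theta_hat, beta**2 * V_inv)
    scores = X @ theta_tilde
    return int(np.argmax(scores))

# Toy usage with random features.
rng = np.random.default_rng(0)
d, num_arms = 5, 20
X = rng.normal(size=(num_arms, d))
theta_hat = rng.normal(size=d)
V = np.eye(d) + X.T @ X
print(select_arm_randomized_ucb(X, theta_hat, V, beta=1.0, rng=rng))
print(select_arm_perturbed_estimate(X, theta_hat, V, beta=1.0, rng=rng))

The second routine illustrates why the estimate-perturbation approach is oracle-efficient: it only needs a single argmax over arms of a linear score, whereas the confidence-set approach manipulates per-arm confidence widths.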
