Perturbed-History Exploration in Stochastic Linear Bandits

Conference on Uncertainty in Artificial Intelligence (UAI), 2019
Abstract

We propose a new online algorithm for minimizing the cumulative regret in stochastic linear bandits. The key idea is to build a perturbed history, which mixes the history of observed rewards with a pseudo-history of randomly generated i.i.d. pseudo-rewards. Our algorithm, perturbed-history exploration in a linear bandit (LinPHE), estimates a linear model from its perturbed history and pulls the arm with the highest value under that model. We prove an $\tilde{O}(d \sqrt{n})$ gap-free bound on the expected $n$-round regret of LinPHE, where $d$ is the number of features. Our analysis relies on novel concentration and anti-concentration bounds on the weighted sum of Bernoulli random variables. To show the generality of our design, we extend LinPHE to a logistic reward model. We evaluate both algorithms empirically and show that they are practical.
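The abstract's description of a LinPHE round can be sketched as follows. This is a hedged reconstruction, not the authors' exact pseudocode: the perturbation scale `a`, the Bernoulli(1/2) pseudo-rewards (natural when rewards lie in [0, 1]), and the ridge regularizer `lam` are assumptions made for illustration.

```python
import numpy as np

def linphe_round(X_hist, y_hist, arms, a=1, lam=1.0, rng=None):
    """One round of a LinPHE-style rule: fit ridge regression to the
    perturbed history, then pull the arm with the highest estimated value.

    X_hist: (t, d) feature vectors of previously pulled arms
    y_hist: (t,)   observed rewards, assumed to lie in [0, 1]
    arms:   (K, d) feature vectors of the available arms
    a:      number of pseudo-rewards mixed in per observation (assumed knob)
    """
    rng = np.random.default_rng() if rng is None else rng
    t, d = X_hist.shape
    if t == 0:
        return int(rng.integers(len(arms)))  # no history yet: pick at random
    # Pseudo-history: attach `a` i.i.d. Bernoulli(1/2) pseudo-rewards to each
    # observed feature vector and add them to the real reward.
    pseudo = rng.binomial(a, 0.5, size=t)          # sum of a fair coin flips
    y_pert = y_hist + pseudo
    # Ridge regression on the mixed data; each feature row effectively
    # appears (a + 1) times (once for the real reward, a times for pseudo).
    G = (a + 1) * X_hist.T @ X_hist + lam * np.eye(d)
    theta = np.linalg.solve(G, X_hist.T @ y_pert)  # perturbed estimate
    return int(np.argmax(arms @ theta))            # greedy w.r.t. the estimate
```

With `a = 0` the rule degenerates to greedy ridge regression on the raw history; the random pseudo-rewards are what drive exploration in the perturbed-history design.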
