Diffusion Approximations for Thompson Sampling

Comments: 41 pages (main text), 3 pages (bibliography), 1 page (appendix); 1 table
Abstract

We study the behavior of Thompson sampling from the perspective of weak convergence. In the regime where the gaps between arm means scale as $1/\sqrt{n}$ with the time horizon $n$, we show that the dynamics of Thompson sampling evolve according to discrete versions of SDEs and random ODEs. As $n \to \infty$, we show that the dynamics converge weakly to solutions of the corresponding SDEs and random ODEs. (Recently, Wager and Xu (arXiv:2101.09855) independently proposed this regime and developed similar SDE and random ODE approximations for Thompson sampling in the multi-armed bandit setting.) Our weak convergence theory, which covers both multi-armed and linear bandit settings, is developed from first principles using the Continuous Mapping Theorem and can be directly adapted to analyze other sampling-based bandit algorithms, for example, algorithms using the bootstrap for exploration. We also establish an invariance principle for multi-armed bandits with gaps scaling as $1/\sqrt{n}$ -- for Thompson sampling and related algorithms involving posterior approximation or the bootstrap, the weak diffusion limits are in general the same regardless of the specifics of the reward distributions or the choice of prior. In particular, as suggested by the classical Bernstein-von Mises normal approximation for posterior distributions, the weak diffusion limits generally coincide with the limit for normally distributed rewards and priors.
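To make the diffusion regime concrete, the following is a minimal illustrative sketch (not the paper's code) of two-armed Gaussian Thompson sampling in which the gap between arm means is set to c/\sqrt{n}, the scaling studied in the abstract. The reward variances, the N(0, 1) priors, and the constant c are assumptions chosen for illustration; the paper's result is that, as n grows, the trajectory of such sampling fractions converges weakly to an SDE / random ODE limit.

```python
import numpy as np

def thompson_sampling_diffusion_regime(n=10_000, c=1.0, seed=0):
    """Two-armed Gaussian Thompson sampling with mean gap c/sqrt(n).

    Illustrative sketch only: rewards are N(mu_k, 1) and each arm has
    an N(0, 1) prior, so the posterior after observing an arm is
    N(sum/(count+1), 1/(count+1)). Returns the fraction of pulls
    allocated to each arm over the horizon n.
    """
    rng = np.random.default_rng(seed)
    mus = np.array([0.0, c / np.sqrt(n)])  # gap scales as 1/sqrt(n)
    counts = np.zeros(2)
    sums = np.zeros(2)
    for _ in range(n):
        # Draw one sample from each arm's Gaussian posterior,
        # then play the arm with the larger sample.
        post_mean = sums / (counts + 1.0)
        post_sd = 1.0 / np.sqrt(counts + 1.0)
        arm = int(np.argmax(rng.normal(post_mean, post_sd)))
        reward = rng.normal(mus[arm], 1.0)
        counts[arm] += 1
        sums[arm] += reward
    return counts / n

fracs = thompson_sampling_diffusion_regime()
```

Because the gap shrinks with n, neither arm's sampling fraction degenerates to 0 or 1 over a single run; the fractions remain random in the limit, which is what the SDE / random ODE approximations capture.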
