Efficient and Adaptive Posterior Sampling Algorithms for Bandits

We study Thompson Sampling-based algorithms for stochastic bandits with bounded rewards. As the existing problem-dependent regret bound for Thompson Sampling with Gaussian priors [Agrawal and Goyal, 2017] is vacuous when $T \le 288 e^{64}$, we derive a more practical bound that tightens the coefficient of the leading term from $288 e^{64}$ to $1270$. Additionally, motivated by large-scale real-world applications that require scalability, adaptive computational resource allocation, and a balance between utility and computation, we propose two parameterized Thompson Sampling-based algorithms: Thompson Sampling with Model Aggregation (TS-MA-$\alpha$) and Thompson Sampling with Timestamp Duelling (TS-TD-$\alpha$), where $\alpha \in [0, 1]$ controls the trade-off between utility and computation. Both algorithms achieve an $O\left(K \ln^{1+\alpha}(T)/\Delta\right)$ regret bound, where $K$ is the number of arms, $T$ is the finite learning horizon, and $\Delta$ denotes the single-round performance loss when pulling a sub-optimal arm.
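For context on the baseline the abstract refers to, below is a minimal sketch of vanilla Thompson Sampling with Gaussian posteriors on a bounded-reward (Bernoulli) bandit, in the spirit of the algorithm analysed by Agrawal and Goyal [2017]. It is not the paper's TS-MA-$\alpha$ or TS-TD-$\alpha$ algorithms; the function name, environment, and the posterior-variance choice $1/(n+1)$ are illustrative assumptions.

```python
# Sketch only: plain Gaussian-posterior Thompson Sampling on a Bernoulli bandit.
# Not the paper's TS-MA-alpha / TS-TD-alpha algorithms; details are assumptions.
import numpy as np

def gaussian_thompson_sampling(true_means, horizon, rng=None):
    """Run Gaussian-posterior Thompson Sampling and return cumulative regret.

    Each arm k keeps an empirical mean mu_hat[k] over n[k] pulls; at each round
    we draw theta[k] ~ N(mu_hat[k], 1 / (n[k] + 1)) and pull the argmax.
    """
    rng = rng or np.random.default_rng(0)
    K = len(true_means)
    n = np.zeros(K)          # pull counts
    mu_hat = np.zeros(K)     # empirical means (prior mean 0)
    best = max(true_means)
    regret = np.zeros(horizon)

    for t in range(horizon):
        # Posterior sample for every arm; variance shrinks as 1 / (n + 1).
        theta = rng.normal(mu_hat, 1.0 / np.sqrt(n + 1.0))
        k = int(np.argmax(theta))
        reward = float(rng.random() < true_means[k])  # Bernoulli reward in {0, 1}
        n[k] += 1
        mu_hat[k] += (reward - mu_hat[k]) / n[k]      # running-mean update
        regret[t] = (regret[t - 1] if t else 0.0) + best - true_means[k]
    return regret

if __name__ == "__main__":
    # Hypothetical example: 5 arms with gap Delta = 0.1 to the best arm.
    cum_regret = gaussian_thompson_sampling([0.5, 0.4, 0.4, 0.3, 0.2], horizon=10_000)
    print(f"cumulative regret after T = 10000 rounds: {cum_regret[-1]:.1f}")
```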