
Efficient and Adaptive Posterior Sampling Algorithms for Bandits

Abstract

We study Thompson Sampling-based algorithms for stochastic bandits with bounded rewards. As the existing problem-dependent regret bound for Thompson Sampling with Gaussian priors [Agrawal and Goyal, 2017] is vacuous when $T \le 288 e^{64}$, we derive a more practical bound that tightens the coefficient of the leading term from $288 e^{64}$ to $1270$. Additionally, motivated by large-scale real-world applications that require scalability, adaptive computational resource allocation, and a balance between utility and computation, we propose two parameterized Thompson Sampling-based algorithms: Thompson Sampling with Model Aggregation (TS-MA-$\alpha$) and Thompson Sampling with Timestamp Duelling (TS-TD-$\alpha$), where $\alpha \in [0,1]$ controls the trade-off between utility and computation. Both algorithms achieve an $O\left(K \ln^{\alpha+1}(T) / \Delta\right)$ regret bound, where $K$ is the number of arms, $T$ is the finite learning horizon, and $\Delta$ denotes the single-round performance loss when pulling a sub-optimal arm.
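For orientation, the sketch below shows the vanilla Thompson Sampling baseline with Gaussian priors analyzed by Agrawal and Goyal [2017], not the TS-MA-$\alpha$ or TS-TD-$\alpha$ variants proposed in the paper; the Bernoulli reward model and the `true_means` parameter are illustrative assumptions.

```python
# Minimal sketch of Thompson Sampling with Gaussian priors for a
# K-armed stochastic bandit with bounded rewards (illustrative only).
import numpy as np

def thompson_sampling_gaussian(true_means, T, rng=None):
    rng = rng or np.random.default_rng(0)
    K = len(true_means)
    counts = np.zeros(K)   # number of pulls n_i per arm
    sums = np.zeros(K)     # cumulative reward per arm
    best_mean = max(true_means)
    regret = 0.0
    for _ in range(T):
        # Posterior sample per arm: N(empirical mean, 1 / (n_i + 1))
        means = sums / np.maximum(counts, 1)
        theta = rng.normal(means, 1.0 / np.sqrt(counts + 1.0))
        arm = int(np.argmax(theta))
        # Bounded (Bernoulli) reward in [0, 1] -- an assumed reward model
        reward = float(rng.random() < true_means[arm])
        counts[arm] += 1
        sums[arm] += reward
        regret += best_mean - true_means[arm]
    return regret

print(thompson_sampling_gaussian([0.5, 0.6, 0.7], T=10_000))
```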
