
Replicability is Asymptotically Free in Multi-armed Bandits

Abstract

We consider replicable stochastic multi-armed bandit algorithms that ensure, with high probability, that the algorithm's sequence of actions is not affected by the randomness inherent in the dataset. Replicability allows third parties to reproduce published findings and assists the original researcher in applying standard statistical tests. We observe that existing replicable algorithms incur $O(K^2/\rho^2)$ times more regret than nonreplicable algorithms, where $K$ is the number of arms and $\rho$ is the level of nonreplication. However, we demonstrate that this additional cost is unnecessary when the time horizon $T$ is sufficiently large for a given $K$ and $\rho$, provided that the magnitude of the confidence bounds is chosen carefully. Consequently, for large $T$, our algorithm incurs only a $K^2/\rho^2$ times smaller amount of additional exploration than existing algorithms. To ensure the replicability of the proposed algorithms, we incorporate randomness into their decision-making processes, and we propose a principled approach to bounding the probability of nonreplication; this approach also elucidates the steps that existing research has implicitly followed. Furthermore, we derive the first lower bound for the two-armed replicable bandit problem, which implies the optimality of the proposed algorithms up to a $\log\log T$ factor in the two-armed case.
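The abstract's central idea (injecting internal randomness into the decision rule so that decisions rarely flip under resampling of the data) can be illustrated with a minimal sketch. The code below is a hypothetical arm-elimination loop, not the paper's actual algorithm: the elimination cutoff is shifted by a random offset drawn from the algorithm's own seeded randomness, so two runs on independently drawn datasets make the same eliminations unless an empirical gap happens to land inside a narrow random band. The names `replicable_elimination` and `pull`, and the specific phase schedule and confidence-width formula, are illustrative assumptions.

```python
import numpy as np

def replicable_elimination(pull, K, phases, seed=0):
    """Hypothetical sketch of a replicable arm-elimination loop.

    pull(arm, n) returns n i.i.d. rewards in [0, 1] for `arm`.
    The internal randomness (rng) is seeded, so reruns on fresh
    datasets share the same random cutoffs; a decision flips only
    when an empirical mean falls inside a small random band, which
    happens with low probability when confidence widths are sized
    appropriately.
    """
    rng = np.random.default_rng(seed)   # shared across reruns
    active = list(range(K))
    means = np.zeros(K)
    for p in range(phases):
        n = 2 ** (p + 5)                # samples per arm this phase
        for a in active:
            means[a] = pull(a, n).mean()
        # Hoeffding-style confidence width (illustrative choice)
        width = np.sqrt(np.log(4 * K * phases) / (2 * n))
        # Random offset inside the width makes the cut replicable w.h.p.
        offset = rng.uniform(-width, width)
        cutoff = max(means[a] for a in active) - 2 * width + offset
        active = [a for a in active if means[a] >= cutoff]
        if len(active) == 1:
            break
    return active
```

Because the cutoff never exceeds the empirically best mean, the empirical leader always survives a phase; the random offset only governs which trailing arms are dropped, which is where the replicability/regret trade-off appears.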
