An Asymptotically Optimal Strategy for Constrained Multi-armed Bandit Problems

Abstract
For the stochastic multi-armed bandit (MAB) problem from a constrained model that generalizes the classical one, we show that asymptotic optimality is achievable by a simple strategy extended from the ε_t-greedy strategy. We provide a finite-time lower bound on the probability of correct selection of an optimal near-feasible arm that holds for all time steps. Under some conditions, the bound approaches one as the time step t goes to infinity. A particular example sequence {ε_t}, together with the asymptotic convergence rate of its bound holding from a sufficiently large t onward, is also discussed.
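To make the idea concrete, here is a minimal Python sketch of a constrained ε_t-greedy strategy of the kind the abstract describes. It is an illustration under stated assumptions, not the paper's exact algorithm: the schedule ε_t ≈ n/t, the single-cost constraint with threshold `cost_limit`, and the empirical-mean test with slack `tol` as a stand-in for near-feasibility are all assumptions introduced here.

```python
import random

def constrained_epsilon_greedy(arms, horizon, cost_limit, tol=0.05, seed=0):
    """Sketch of an epsilon_t-greedy strategy for a constrained MAB.

    `arms` is a list of zero-argument callables; each call returns a
    (reward, cost) sample. At step t we explore uniformly with probability
    epsilon_t, and otherwise exploit: we play the arm with the highest
    empirical mean reward among arms whose empirical mean cost is within
    `tol` of `cost_limit` (an assumed proxy for near-feasibility).
    """
    rng = random.Random(seed)
    n = len(arms)
    counts = [0] * n
    reward_sum = [0.0] * n
    cost_sum = [0.0] * n

    def pull(i):
        r, c = arms[i]()
        counts[i] += 1
        reward_sum[i] += r
        cost_sum[i] += c

    # Initialize: pull each arm once so all empirical means are defined.
    for i in range(n):
        pull(i)

    for t in range(n + 1, horizon + 1):
        epsilon_t = min(1.0, n / t)  # assumed decreasing exploration schedule
        if rng.random() < epsilon_t:
            i = rng.randrange(n)  # explore: uniform random arm
        else:
            # Exploit: best empirical reward among near-feasible arms.
            feasible = [j for j in range(n)
                        if cost_sum[j] / counts[j] <= cost_limit + tol]
            pool = feasible or range(n)  # fall back if none look feasible
            i = max(pool, key=lambda j: reward_sum[j] / counts[j])
        pull(i)

    best = max(range(n), key=lambda j: reward_sum[j] / counts[j])
    return best, [reward_sum[j] / counts[j] for j in range(n)]
```

With a decreasing schedule like this one, exploration never stops entirely but vanishes over time, which is the mechanism behind the asymptotic guarantee the abstract refers to: the probability that the returned `best` is an optimal near-feasible arm should improve as the horizon grows.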