The Multi-Armed Bandit Problem: An Efficient Non-Parametric Solution
Lai and Robbins (1985) and Lai (1987) provided efficient parametric solutions to the multi-armed bandit problem, showing that arm allocation via upper confidence bounds (UCB) achieves asymptotically minimum regret. These bounds are constructed from the Kullback-Leibler information of the reward distributions, estimated from within a specified parametric family. In recent years there has been renewed interest in the multi-armed bandit problem due to new applications in machine learning algorithms and data analytics. Non-parametric arm allocation procedures such as ε-greedy and Boltzmann exploration have been studied, and modified versions of the UCB procedure have also been analyzed under a non-parametric setting. However, unlike UCB, these non-parametric procedures are not efficient under a parametric setting. In this paper we propose a subsample comparison procedure that is non-parametric, but still efficient under parametric settings.
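To make the baseline procedures mentioned above concrete, the following is a minimal sketch of two of them on a Bernoulli bandit: ε-greedy and the standard non-parametric UCB1 index of Auer et al. (2002). This is illustrative only; it does not implement the subsample comparison procedure proposed in the paper, and the arm means, horizon, and ε value are arbitrary choices for the example.

```python
import math
import random

def eps_greedy(means, horizon, eps=0.1, seed=0):
    """epsilon-greedy: with probability eps pull a uniformly random arm,
    otherwise pull the arm with the highest empirical mean reward."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    total = 0.0
    for t in range(horizon):
        if rng.random() < eps or 0 in counts:
            arm = rng.randrange(k)  # explore (or initialize unpulled arms)
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i])  # exploit
        r = 1.0 if rng.random() < means[arm] else 0.0  # Bernoulli reward
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total

def ucb1(means, horizon, seed=0):
    """UCB1: pull the arm maximizing empirical mean + sqrt(2 ln t / n_i),
    a non-parametric upper confidence bound."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    total = 0.0
    for t in range(horizon):
        if t < k:
            arm = t  # pull each arm once to initialize its index
        else:
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total
```

For two arms with success probabilities 0.9 and 0.1, both procedures concentrate pulls on the better arm, but ε-greedy keeps paying a constant exploration cost while UCB1's confidence radius shrinks as counts grow, which is the behavior the regret analyses above formalize.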