
Top $K$ Ranking for Multi-Armed Bandit with Noisy Evaluations

Abstract

We consider a multi-armed bandit setting where, at the beginning of each round, the learner receives noisy, independent, and possibly biased \emph{evaluations} of the true reward of each arm, and it selects $K$ arms with the objective of accumulating as much reward as possible over $T$ rounds. Under the assumption that at each round the true reward of each arm is drawn from a fixed distribution, we derive different algorithmic approaches and theoretical guarantees depending on how the evaluations are generated. First, we show an $\widetilde{O}(T^{2/3})$ regret in the general case when the observation functions are a generalized linear function of the true rewards. On the other hand, we show that an improved $\widetilde{O}(\sqrt{T})$ regret can be derived when the observation functions are noisy linear functions of the true rewards. Finally, we report an empirical validation that confirms our theoretical findings, provides a thorough comparison to alternative approaches, and further supports the interest of this setting in practice.
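
To make the interaction protocol concrete, here is a minimal simulation sketch of the setting described above. The per-arm reward distributions, the noisy linear observation model, and the greedy top-$K$ selection rule are all illustrative assumptions for exposition; they are not the paper's algorithms or analysis.

```python
import numpy as np

# Minimal sketch of the round-by-round protocol, under assumed models:
# Gaussian per-arm rewards, a noisy linear observation function
# (evaluation = a * reward + b + noise), and a greedy top-K baseline.

rng = np.random.default_rng(0)
n_arms, K, T = 10, 3, 1000
mu = rng.uniform(0.0, 1.0, size=n_arms)   # fixed per-arm reward means
a, b = 1.5, 0.2                            # assumed linear observation bias
sigma = 0.1                                # assumed evaluation noise level

total_reward = 0.0
for t in range(T):
    # True rewards for this round, drawn from fixed distributions.
    rewards = rng.normal(mu, 0.05)
    # Noisy, possibly biased evaluations of the true rewards.
    evaluations = a * rewards + b + rng.normal(0.0, sigma, size=n_arms)
    # Greedy baseline: select the K arms with the highest evaluations.
    chosen = np.argsort(evaluations)[-K:]
    total_reward += rewards[chosen].sum()

print(f"Average per-round reward over {T} rounds: {total_reward / T:.3f}")
```

Under this monotone (noisy linear) observation model, ranking by the evaluations preserves the ranking of the true rewards in expectation, which is why even the greedy baseline performs reasonably here; the paper's contribution concerns regret guarantees when the observation functions are only generalized linear or must be learned.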
