A Unified Approach to Translate Classical Bandit Algorithms to the Structured Bandit Setting

Abstract

We consider a finite-armed structured bandit problem in which the mean rewards of different arms are functions of a common hidden parameter θ*. This setting subsumes several previously studied frameworks that assume linear or invertible reward functions. Our approach exploits the structure in the problem gradually: only some of the sub-optimal arms are pulled O(log T) times, while the remaining sub-optimal arms, termed non-competitive, are pulled only O(1) times. Put differently, the set of non-competitive arms, which depends on the hidden parameter θ*, stops being pulled after some finite time. We show how this approach can be turned into a general algorithm that can be coupled with any classical bandit strategy (UCB, Thompson Sampling, KL-UCB, etc.), allowing these strategies to be used in the structured bandit setting with substantial reductions in regret. In particular, we obtain bounded regret in several cases of practical interest where all sub-optimal arms are non-competitive. We also demonstrate the superiority of our algorithms over existing methods (including UCB-S) via experiments on the MovieLens dataset.
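To make the competitive/non-competitive distinction concrete, below is a minimal Python sketch of the general recipe the abstract describes: maintain a confidence set of plausible values of θ*, keep only the arms that are optimal for some plausible θ (the competitive arms), and run a classical index policy such as UCB on that set. The finite grid over θ, the Gaussian noise model, and the particular confidence-set construction here are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def structured_ucb(mu_fns, theta_grid, T, rng=None):
    """Sketch: UCB restricted to arms that are optimal for some plausible theta.

    mu_fns     -- known reward functions, mu_fns[k](theta) = mean reward of arm k
    theta_grid -- assumed finite grid of candidate values for the hidden theta*
    """
    rng = rng or np.random.default_rng(0)
    K = len(mu_fns)
    counts = np.zeros(K)
    sums = np.zeros(K)

    # Stand-in environment for the demo: a true theta* and unit-Gaussian noise.
    theta_star = theta_grid[len(theta_grid) // 2]
    pull = lambda k: mu_fns[k](theta_star) + rng.normal()

    for t in range(1, T + 1):
        if t <= K:  # pull each arm once to initialize the empirical means
            k = t - 1
        else:
            means = sums / counts
            radius = np.sqrt(2 * np.log(t) / counts)  # 1-sub-Gaussian UCB width
            # Confidence set: theta values consistent with every empirical mean.
            consistent = [
                th for th in theta_grid
                if all(abs(mu_fns[k](th) - means[k]) <= radius[k] for k in range(K))
            ]
            if not consistent:  # fall back to plain UCB if the set is empty
                consistent = list(theta_grid)
            # Competitive arms: optimal for at least one plausible theta.
            competitive = {
                int(np.argmax([mu_fns[k](th) for k in range(K)]))
                for th in consistent
            }
            # Classical UCB index, maximized over the competitive set only.
            ucb = means + radius
            k = max(competitive, key=lambda a: ucb[a])
        r = pull(k)
        counts[k] += 1
        sums[k] += r
    return counts  # non-competitive arms should receive only O(1) pulls

# Example with linear reward functions mu_k(theta) = a_k * theta + b_k.
fns = [lambda th, a=a, b=b: a * th + b
       for a, b in [(1.0, 0.0), (-0.5, 1.0), (0.2, 0.3)]]
print(structured_ucb(fns, np.linspace(0.0, 1.0, 21), T=2000))
```

Under this scheme, an arm that is not optimal for any θ in the shrinking confidence set is filtered out and never pulled again, which is the mechanism behind the O(1) pulls for non-competitive arms; replacing the UCB index with Thompson Sampling or KL-UCB scores on the competitive set gives the other variants the abstract mentions.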
