
Combinatorial Bandits without Total Order for Arms

Inderjit S. Dhillon
Sujay Sanghavi
Abstract

We consider the combinatorial bandits problem, where at each time step the online learner selects a size-$k$ subset $s$ of the arm set $\mathcal{A}$, with $|\mathcal{A}| = n$, and observes a stochastic reward for each arm in the selected set $s$. The goal of the online learner is to minimize the regret incurred by not selecting $s^*$, the set that maximizes the expected total reward. Specifically, we focus on a challenging setting where 1) the reward distribution of an arm depends on the set $s$ it is part of, and crucially 2) there is \textit{no total order} for the arms in $\mathcal{A}$. In this paper, we formally present a reward model that captures set-dependent reward distributions and assumes no total order for the arms. Correspondingly, we propose an Upper Confidence Bound (UCB) algorithm that maintains a UCB for each individual arm and selects the arms with the top-$k$ UCBs. We develop a novel regret analysis and show an $O\left(\frac{k^2 n \log T}{\epsilon}\right)$ gap-dependent regret bound as well as an $O\left(k^2 \sqrt{nT \log T}\right)$ gap-independent regret bound. We also provide a lower bound for the proposed reward model, which shows that our proposed algorithm is near-optimal for any constant $k$. Empirical results on various reward models demonstrate the broad applicability of our algorithm.
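To make the selection rule concrete, the following is a minimal simulation sketch of the top-$k$ UCB strategy the abstract describes. It is not the paper's algorithm or analysis: it assumes a standard UCB1-style index and i.i.d. Bernoulli rewards per arm, ignoring the set-dependent reward distributions the paper actually studies, and all parameters (`n`, `k`, `T`) are illustrative.

```python
import numpy as np

def topk_ucb_simulation(n=10, k=3, T=10_000, seed=0):
    """Sketch: play the k arms with the highest UCB indices each round.

    Assumptions (not from the paper): UCB1-style index
    mean + sqrt(2 log t / count), i.i.d. Bernoulli rewards per arm,
    and semi-bandit feedback (one reward observed per selected arm).
    """
    rng = np.random.default_rng(seed)
    true_means = rng.uniform(0.0, 1.0, size=n)  # hypothetical arm means
    counts = np.zeros(n)
    means = np.zeros(n)
    total_reward = 0.0

    for t in range(1, T + 1):
        # Optimistic index per arm; unplayed arms get an infinite index
        # so that every arm is explored at least once.
        with np.errstate(divide="ignore", invalid="ignore"):
            bonus = np.sqrt(2.0 * np.log(t) / counts)
        ucb = np.where(counts > 0, means + bonus, np.inf)

        # Select the arms with the top-k UCB indices.
        chosen = np.argpartition(-ucb, k)[:k]

        # Observe a stochastic reward for each arm in the chosen set
        # and update the per-arm empirical means incrementally.
        rewards = rng.binomial(1, true_means[chosen])
        total_reward += rewards.sum()
        counts[chosen] += 1
        means[chosen] += (rewards - means[chosen]) / counts[chosen]

    return total_reward, true_means

if __name__ == "__main__":
    reward, mu = topk_ucb_simulation()
    best_k_sum = np.sort(mu)[-3:].sum()
    print(f"cumulative reward: {reward:.0f}; best size-3 mean sum: {best_k_sum:.2f}")
```

Because each arm keeps its own index, the per-round selection costs only a partial sort over $n$ indices rather than a search over all $\binom{n}{k}$ subsets, which is what makes the top-$k$ rule attractive when no total order among arms is assumed.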
