We consider the combinatorial bandits problem, where at each time step, the online learner selects a size-$k$ subset $s$ from the arm set $\mathcal{A}$, where $|\mathcal{A}| = n$, and observes a stochastic reward for each arm in the selected set $s$. The goal of the online learner is to minimize the regret induced by not selecting $s^*$, the subset that maximizes the expected total reward. Specifically, we focus on a challenging setting where 1) the reward distribution of an arm depends on the set it is part of, and crucially 2) there is \textit{no total order} for the arms in $\mathcal{A}$. In this paper, we formally present a reward model that captures set-dependent reward distributions and assumes no total order for arms. Correspondingly, we propose an Upper Confidence Bound (UCB) algorithm that maintains a UCB for each individual arm and selects the arms with the top-$k$ UCBs. We develop a novel regret analysis and show a gap-dependent regret bound as well as a gap-independent regret bound. We also provide a lower bound for the proposed reward model, which shows that our proposed algorithm is near-optimal for any constant $k$. Empirical results on various reward models demonstrate the broad applicability of our algorithm.
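A minimal sketch of the top-$k$ UCB selection rule described above, assuming semi-bandit feedback delivered by a user-supplied `pull` function that returns one stochastic reward per selected arm. The names `top_k_ucb` and the bonus scale `alpha` are illustrative assumptions, not taken from the paper, and the toy environment below uses set-independent Bernoulli rewards purely for demonstration (the paper's model allows set-dependent distributions):

```python
import numpy as np

def top_k_ucb(pull, n, k, T, alpha=2.0):
    """Sketch: maintain a UCB per arm; each round, play the top-k UCB arms.

    `pull(s)` is assumed to return a length-k array of per-arm rewards
    for the selected index set `s` (semi-bandit feedback).
    """
    counts = np.zeros(n)  # number of times each arm was selected
    sums = np.zeros(n)    # cumulative reward observed per arm
    for t in range(1, T + 1):
        means = sums / np.maximum(counts, 1)
        bonus = np.sqrt(alpha * np.log(t + 1) / np.maximum(counts, 1))
        # unexplored arms get an infinite index, forcing initial exploration
        ucb = np.where(counts > 0, means + bonus, np.inf)
        s = np.argsort(-ucb)[:k]  # arms with the top-k UCB indices
        rewards = pull(s)
        counts[s] += 1
        sums[s] += rewards
    return sums, counts

# Toy usage: n = 6 arms, k = 2, fixed Bernoulli means (illustrative only)
rng = np.random.default_rng(0)
mu = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])
sums, counts = top_k_ucb(lambda s: rng.binomial(1, mu[s]).astype(float),
                         n=6, k=2, T=5000)
```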