Combinatorial Bandits without Total Order for Arms

3 March 2021 · arXiv:2103.02741
Shuo Yang, Zhaolin Ren, Inderjit S. Dhillon, Sujay Sanghavi
Abstract

We consider the combinatorial bandits problem, where at each time step the online learner selects a size-$k$ subset $s$ from the arm set $\mathcal{A}$, where $|\mathcal{A}| = n$, and observes a stochastic reward for each arm in the selected set $s$. The goal of the online learner is to minimize the regret incurred by not selecting $s^*$, the set that maximizes the expected total reward. Specifically, we focus on a challenging setting where 1) the reward distribution of an arm depends on the set $s$ it is part of, and crucially 2) there is \textit{no total order} for the arms in $\mathcal{A}$. In this paper, we formally present a reward model that captures set-dependent reward distributions and assumes no total order for arms. Correspondingly, we propose an Upper Confidence Bound (UCB) algorithm that maintains a UCB for each individual arm and selects the arms with the top-$k$ UCBs. We develop a novel regret analysis and show an $O\left(\frac{k^2 n \log T}{\epsilon}\right)$ gap-dependent regret bound as well as an $O\left(k^2 \sqrt{n T \log T}\right)$ gap-independent regret bound. We also provide a lower bound for the proposed reward model, which shows that our proposed algorithm is near-optimal for any constant $k$. Empirical results on various reward models demonstrate the broad applicability of our algorithm.
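The selection rule described in the abstract (maintain one UCB per arm, play the arms with the top-$k$ UCBs) can be sketched in a few lines. The following is a minimal Python illustration under standard UCB assumptions, not the paper's exact algorithm: the environment callback `pull`, the confidence width $\sqrt{c \log t / N_i}$, and the constant `c` are all stand-ins introduced here for illustration.

```python
import numpy as np

def top_k_ucb(n, k, T, pull, c=2.0):
    """Run a top-k UCB selection rule for T rounds.

    `pull(s)` is a hypothetical environment callback returning a
    length-k array of stochastic rewards, one per arm in the set s.
    The width sqrt(c * log t / N_i) and the default c=2.0 are
    standard-UCB assumptions, not taken from the paper.
    """
    counts = np.zeros(n)   # times each arm has been observed
    means = np.zeros(n)    # empirical mean reward per arm
    for t in range(1, T + 1):
        # Confidence width; 0/0 for unseen arms is handled below.
        with np.errstate(divide="ignore", invalid="ignore"):
            width = np.sqrt(c * np.log(t) / counts)
        # Unobserved arms get an infinite UCB so they are tried first.
        ucb = np.where(counts > 0, means + width, np.inf)
        s = np.argsort(ucb)[-k:]       # indices of the top-k UCBs
        rewards = np.asarray(pull(s))  # reward of each arm in s
        counts[s] += 1
        means[s] += (rewards - means[s]) / counts[s]
    return means, counts
```

For instance, `pull` could draw Bernoulli rewards whose means depend on the chosen set, matching the set-dependent reward setting the abstract describes.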
