
arXiv:1809.02707
Analysis of Thompson Sampling for Combinatorial Multi-armed Bandit with Probabilistically Triggered Arms

7 September 2018
Alihan Huyuk
Cem Tekin
Abstract

We analyze the regret of combinatorial Thompson sampling (CTS) for the combinatorial multi-armed bandit with probabilistically triggered arms under the semi-bandit feedback setting. We assume that the learner has access to an exact optimization oracle but does not know the expected base arm outcomes beforehand. When the expected reward function is Lipschitz continuous in the expected base arm outcomes, we derive an $O(\sum_{i=1}^{m} \log T / (p_i \Delta_i))$ regret bound for CTS, where $m$ denotes the number of base arms, $p_i$ denotes the minimum non-zero triggering probability of base arm $i$, and $\Delta_i$ denotes the minimum suboptimality gap of base arm $i$. We also compare CTS with the combinatorial upper confidence bound (CUCB) algorithm via numerical experiments on a cascading bandit problem.
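
For intuition, the following is a minimal sketch of CTS on a toy cascading bandit, the setting used in the paper's experiments. It assumes Bernoulli base arm outcomes with Beta posteriors and a top-K ranking oracle; all names and parameter values are illustrative and it is not the authors' implementation.

# Sketch of combinatorial Thompson sampling (CTS) on a cascading bandit.
# Base arm i is an item with unknown click probability theta[i]; an action is a
# ranked list of K items, and the learner observes outcomes only down to the
# first clicked item (semi-bandit feedback with probabilistic triggering).
import numpy as np

rng = np.random.default_rng(0)

m, K, T = 10, 3, 5000                     # base arms, list length, horizon
theta = rng.uniform(0.1, 0.5, size=m)     # true (unknown) click probabilities

alpha = np.ones(m)                        # Beta posterior: successes + 1
beta = np.ones(m)                         # Beta posterior: failures + 1

for t in range(T):
    # 1. Sample expected base arm outcomes from the posterior.
    sample = rng.beta(alpha, beta)
    # 2. Exact oracle for the cascading bandit: rank the K items with the
    #    largest sampled click probabilities.
    action = np.argsort(sample)[::-1][:K]
    # 3. Play the list; items up to and including the first click are
    #    triggered and observed, the rest are not.
    for i in action:
        clicked = rng.random() < theta[i]
        # 4. Update the posterior of every triggered base arm.
        alpha[i] += clicked
        beta[i] += 1 - clicked
        if clicked:
            break

Running the same loop with upper confidence bound indices in place of posterior samples would give the CUCB baseline the abstract compares against.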
