  3. 1810.08164
A Unified Approach to Translate Classical Bandit Algorithms to the Structured Bandit Setting

18 October 2018
Samarth Gupta, Shreyas Chaudhari, Subhojyoti Mukherjee, Gauri Joshi, Osman Yağan
Abstract

We consider a finite-armed structured bandit problem in which the mean rewards of different arms are functions of a common hidden parameter θ∗. This setting subsumes several previously studied frameworks that assume linear or invertible reward functions. Our approach exploits the structure in the problem gradually and pulls only some of the sub-optimal arms O(log T) times, while the remaining sub-optimal arms, termed non-competitive, are pulled only O(1) times. Put differently, the set of non-competitive arms, which depends on the hidden parameter θ∗, stops being pulled after some finite time. We show how this approach can be turned into a general algorithm that can be coupled with any classical bandit strategy (UCB, Thompson Sampling, KL-UCB, etc.), allowing them to be used in the structured bandit setting with substantial reductions in regret. In particular, we obtain bounded regret in several cases of practical interest where all sub-optimal arms are non-competitive. We also demonstrate the superiority of our algorithms over existing methods (including UCB-S) via experiments on the MovieLens dataset.
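
The meta-algorithm the abstract describes can be sketched as follows: maintain a confidence set for the hidden parameter θ∗, keep only the arms that are optimal for some plausible parameter value (the competitive arms), and let the base bandit strategy choose among those. The Python sketch below pairs this idea with UCB as the base strategy; the reward functions, the grid-based confidence set for θ, and all identifiers are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of the competitive-set meta-algorithm with UCB as the base
# strategy. The mean-reward functions mu_k(theta), the grid-based confidence
# set for theta, and all names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

theta_star = 0.7                        # hidden parameter (unknown to learner)
mu = [lambda t: t,                      # mean-reward functions mu_k(theta),
      lambda t: t ** 2,                 # assumed known to the learner
      lambda t: 1.0 - t]
K = len(mu)
theta_grid = np.linspace(0.0, 1.0, 201)  # candidate values of theta

T = 5000
counts = np.zeros(K)
sums = np.zeros(K)

for t in range(1, T + 1):
    if t <= K:                          # pull each arm once to initialize
        arm = t - 1
    else:
        means = sums / counts
        # Confidence set for theta: values whose predicted arm means are
        # close to the empirical means (width shrinks like sqrt(log t / n)).
        widths = np.sqrt(2.0 * np.log(t) / counts)
        ok = np.ones_like(theta_grid, dtype=bool)
        for k in range(K):
            predicted = np.array([mu[k](th) for th in theta_grid])
            ok &= np.abs(predicted - means[k]) <= widths[k]
        candidates = theta_grid[ok] if ok.any() else theta_grid

        # Competitive arms: arms that are optimal for some plausible theta.
        competitive = {int(np.argmax([mu[k](th) for k in range(K)]))
                       for th in candidates}

        # Base strategy (UCB here) restricted to the competitive set.
        ucb = means + widths
        arm = max(competitive, key=lambda k: ucb[k])

    reward = mu[arm](theta_star) + 0.1 * rng.standard_normal()
    counts[arm] += 1
    sums[arm] += reward
```

Under this scheme, an arm that is optimal for no plausible value of θ (a non-competitive arm) drops out of the competitive set once the confidence set is small enough, which is what yields the O(1) pull counts for non-competitive arms described above.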
