ResearchTrend.AI

arXiv: 1808.04008 (v3, latest)

PAC-Battling Bandits with Plackett-Luce: Tradeoff between Sample Complexity and Subset Size

12 August 2018
Aadirupa Saha
Aditya Gopalan
Abstract

We introduce the probably approximately correct (PAC) version of the problem of Battling Bandits with the Plackett-Luce (PL) model -- an online learning framework in which, in each trial, the learner chooses a subset of k ≤ n arms from a fixed pool of n arms and subsequently observes stochastic feedback indicating preference information over the items in the chosen subset, e.g., the most preferred item, or a ranking of the top m most preferred items. The objective is to recover an 'approximate-best' item of the underlying PL model with high probability. This framework is motivated by practical settings such as recommendation systems and information retrieval, where it is easier and more efficient to collect relative feedback for multiple arms at once. Our framework can be seen as a generalization of the well-studied PAC Dueling-Bandit problem over a set of n arms. We propose two feedback models: winner information (WI) only, and ranking of the top m items (TR), for any 2 ≤ m ≤ k. We show that with winner information (WI) alone, one cannot recover the 'approximate-best' item with sample complexity smaller than Ω((n/ε²) ln(1/δ)), which is independent of k and the same as that required in the standard dueling-bandit setting (k = 2). With top-m ranking (TR) feedback, however, our lower-bound analysis proves an improved sample complexity guarantee of Ω((n/(m ε²)) ln(1/δ)), a factor-of-1/m improvement over WI feedback that justifies the additional information gained from knowing the ranking of the top m items. We also provide algorithms for each of the above feedback models; our theoretical analysis proves the optimality of their sample complexities, which match the derived lower bounds (up to logarithmic factors).
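Under the Plackett-Luce model, each arm i carries a positive score θ_i, and the probability that arm i wins within a chosen subset S is θ_i / Σ_{j∈S} θ_j; a top-m ranking is generated by drawing successive winners without replacement. The two feedback models (WI and TR) described in the abstract can be simulated with a minimal sketch like the following (function names and the dict-based score representation are our own illustration, not from the paper):

```python
import random

def pl_winner(scores, subset, rng=random):
    # Winner-information (WI) feedback: arm i wins the subset with
    # probability scores[i] / sum of scores over the subset.
    weights = [scores[i] for i in subset]
    return rng.choices(subset, weights=weights, k=1)[0]

def pl_top_m(scores, subset, m, rng=random):
    # Top-m ranking (TR) feedback: repeatedly draw a PL winner and
    # remove it, yielding a ranking of the m most preferred items.
    remaining = list(subset)
    ranking = []
    for _ in range(m):
        winner = pl_winner(scores, remaining, rng)
        ranking.append(winner)
        remaining.remove(winner)
    return ranking
```

For m = len(subset) this produces a full PL-distributed permutation of the subset; WI feedback is the special case m = 1.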

View on arXiv