The True Sample Complexity of Identifying Good Arms

15 June 2019
Julian Katz-Samuels
Kevin Jamieson
arXiv:1906.06594 (abs) · PDF · HTML
Abstract

We consider two multi-armed bandit problems with $n$ arms: (i) given an $\epsilon > 0$, identify an arm with mean within $\epsilon$ of the largest mean, and (ii) given a threshold $\mu_0$ and integer $k$, identify $k$ arms with means larger than $\mu_0$. Existing lower bounds and algorithms for the PAC framework suggest that both of these problems require $\Omega(n)$ samples. However, we argue that these definitions not only conflict with how these algorithms are used in practice, but also that these results disagree with the intuition that says (i) requires only $\Theta(\frac{n}{m})$ samples, where $m = |\{ i : \mu_i > \max_{i \in [n]} \mu_i - \epsilon \}|$, and (ii) requires $\Theta(\frac{n}{m} k)$ samples, where $m = |\{ i : \mu_i > \mu_0 \}|$. We provide definitions that formalize these intuitions, obtain lower bounds that match the above sample complexities, and develop explicit, practical algorithms that achieve nearly matching upper bounds.
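The $\Theta(\frac{n}{m})$ intuition can be seen with a short simulation. The sketch below is not the authors' algorithm; it only illustrates the counting argument behind problem (ii) with $k = 1$: if $m$ of the $n$ arms have means above $\mu_0$, then drawing arms uniformly at random hits a good arm after about $n/m$ draws in expectation, so the total work scales with $n/m$ times the pulls spent per examined arm rather than with $n$. The values of $n$, $m$, and the function names are illustrative assumptions, not taken from the paper.

```python
import random

def count_examined_arms(n, m, trials=10_000, seed=0):
    """Average number of uniformly random draws (with replacement)
    needed to hit one of the m 'good' arms among n arms.
    This is a geometric random variable with mean n/m."""
    rng = random.Random(seed)
    good = set(range(m))  # pretend arms 0..m-1 are the good ones (illustrative)
    total = 0
    for _ in range(trials):
        draws = 0
        while True:
            draws += 1
            if rng.randrange(n) in good:
                break
        total += draws
    return total / trials

if __name__ == "__main__":
    n = 1000
    for m in (10, 50, 100):
        avg = count_examined_arms(n, m)
        print(f"n={n}, m={m}: examined ~ {avg:.1f} arms (n/m = {n/m:.1f})")
```

Running this shows the empirical average tracking $n/m$ closely; the paper's contribution is to turn this counting intuition into formal definitions, matching lower bounds, and practical algorithms with nearly matching upper bounds.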
