ResearchTrend.AI

arXiv:2102.05164
Nonstochastic Bandits with Infinitely Many Experts

9 February 2021
X. Meng
Tuhin Sarkar
M. Dahleh
    OffRL
Abstract

We study the problem of nonstochastic bandits with expert advice, extending the setting from finitely many experts to any countably infinite set: a learner aims to maximize the total reward by taking actions sequentially based on bandit feedback while benchmarking against a set of experts. We propose a variant of Exp4.P that, for finitely many experts, enables inference of correct expert rankings while preserving the order of the regret upper bound. We then incorporate the variant into a meta-algorithm that works with infinitely many experts. We prove a high-probability upper bound of $\tilde{\mathcal{O}}\big(i^* K + \sqrt{KT}\big)$ on the regret, up to polylog factors, where $i^*$ is the unknown position of the best expert, $K$ is the number of actions, and $T$ is the time horizon. We also provide an example of structured experts and discuss how to expedite learning in such cases. Our meta-learning algorithm achieves optimal regret up to polylog factors when $i^* = \tilde{\mathcal{O}}\big(\sqrt{T/K}\big)$. If a prior distribution is assumed to exist for $i^*$, the probability of optimality increases with $T$, and the rate can be fast.
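To make the expert-advice setting concrete, below is a minimal sketch of the classic Exp4 exponential-weighting scheme that Exp4.P builds on: experts recommend distributions over the $K$ actions, the learner mixes their advice by exponential weights, and updates the weights with importance-weighted reward estimates. This is a generic illustration under assumed interfaces (`experts`, `reward_fn` are hypothetical names), not the paper's variant — Exp4.P additionally adds confidence terms for high-probability guarantees, and the paper's meta-algorithm for infinitely many experts is omitted here.

```python
import math
import random

def exp4(experts, reward_fn, K, T, gamma=0.1):
    """Minimal Exp4-style sketch (exponential weighting for exploration
    and exploitation with expert advice). Exp4.P's confidence terms,
    and the paper's ranking/meta-learning machinery, are omitted.

    experts:   list of functions t -> probability vector over K actions
    reward_fn: function (t, action) -> reward in [0, 1]
    """
    N = len(experts)
    log_w = [0.0] * N  # log-weights over experts, for numerical stability
    total_reward = 0.0
    for t in range(T):
        advice = [e(t) for e in experts]  # each a distribution over actions
        # Normalize expert weights.
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]
        W = sum(w)
        # Mix expert advice, with uniform exploration of rate gamma.
        p = [(1 - gamma) * sum(w[i] * advice[i][a] for i in range(N)) / W
             + gamma / K
             for a in range(K)]
        # Sample an action and observe bandit feedback.
        a = random.choices(range(K), weights=p)[0]
        r = reward_fn(t, a)
        total_reward += r
        # Importance-weighted estimate of the reward vector (nonzero only at a).
        xhat = r / p[a]
        # Each expert is credited with its expected estimated reward.
        for i in range(N):
            log_w[i] += (gamma / K) * advice[i][a] * xhat
    return total_reward
```

For example, with two experts on $K = 2$ actions, one always recommending the rewarding action and one the other, the learner's weight quickly concentrates on the better expert and the average reward approaches the exploration-limited optimum.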
