ResearchTrend.AI


arXiv:2206.07908

Simultaneously Learning Stochastic and Adversarial Bandits with General Graph Feedback

16 June 2022
Fang-yuan Kong
Yichi Zhou
Shuai Li
Abstract

The problem of online learning with graph feedback has been extensively studied in the literature due to its generality and potential to model various learning tasks. Existing works mainly study the adversarial and the stochastic feedback settings separately. If prior knowledge of the feedback mechanism is unavailable or wrong, such specially designed algorithms may suffer great loss. To avoid this problem, Erez and Koren (2021) try to optimize for both environments. However, they assume the feedback graphs are undirected and that each vertex has a self-loop, which compromises the generality of the framework and may not be satisfied in applications. With a general feedback graph, the observation of an arm may not be available when that arm is pulled, which makes exploration more expensive and makes it more challenging for an algorithm to perform optimally in both environments. In this work, we overcome this difficulty with a new trade-off mechanism based on a carefully designed proportion of exploration and exploitation. We prove that the proposed algorithm simultaneously achieves $\mathrm{poly}\log T$ regret in the stochastic setting and minimax-optimal regret of $\tilde{O}(T^{2/3})$ in the adversarial setting, where $T$ is the horizon and $\tilde{O}$ hides parameters independent of $T$ as well as logarithmic terms. To our knowledge, this is the first best-of-both-worlds result for general feedback graphs.
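To make the setting concrete, here is a minimal illustrative sketch (not the paper's algorithm) of the graph-feedback observation model: pulling arm $i$ reveals the losses of the out-neighbours of $i$ in a directed feedback graph, and without self-loops the pulled arm's own loss may stay hidden. The sketch runs exponential weights with a forced-exploration proportion decaying like $t^{-1/3}$, the rate associated with $\tilde{O}(T^{2/3})$ regret on weakly observable graphs; the graph, loss sequence, and learning rate are all hypothetical choices for the demo.

```python
import math
import random

K = 3
# Hypothetical directed feedback graph without self-loops:
# graph[i] = arms whose losses are observed when arm i is pulled.
graph = {0: {1, 2}, 1: {0}, 2: {0, 1}}

def loss(t, i):
    # Toy (oblivious) adversarial loss sequence in [0, 1].
    return 0.5 * (1 + math.sin(0.1 * t + i))

T = 1000
eta = 0.05          # learning rate (illustrative choice)
weights = [1.0] * K
total_loss = 0.0
random.seed(0)

for t in range(1, T + 1):
    gamma = min(1.0, t ** (-1 / 3))   # forced-exploration proportion ~ t^{-1/3}
    s = sum(weights)
    probs = [(1 - gamma) * w / s + gamma / K for w in weights]
    arm = random.choices(range(K), probs)[0]
    total_loss += loss(t, arm)

    # Losses of out-neighbours of the pulled arm are observed; build
    # importance-weighted estimates using the observation probability.
    for j in range(K):
        p_obs = sum(probs[i] for i in range(K) if j in graph[i])
        if j in graph[arm] and p_obs > 0:
            est = loss(t, j) / p_obs
            weights[j] *= math.exp(-eta * est)

    # Rescale for numerical stability (does not change the distribution).
    m = max(weights)
    weights = [w / m for w in weights]

print(f"average loss over {T} rounds: {total_loss / T:.3f}")
```

Note how arm 0's own loss is never observed when arm 0 is pulled; the learner only learns about it while playing arms 1 or 2, which is exactly why the forced-exploration term `gamma / K` in the sampling distribution cannot be dropped.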
