Restless-UCB, an Efficient and Low-complexity Algorithm for Online Restless Bandits

5 November 2020
Siwei Wang
Longbo Huang
John C. S. Lui
arXiv: 2011.02664
Abstract

We study the online restless bandit problem, where the state of each arm evolves according to a Markov chain, and the reward of pulling an arm depends on both the pulled arm and the current state of the corresponding Markov chain. In this paper, we propose Restless-UCB, a learning policy that follows the explore-then-commit framework. In Restless-UCB, we present a novel method to construct offline instances, which only requires $O(N)$ time complexity ($N$ is the number of arms) and is exponentially better than the complexity of existing learning policies. We also prove that Restless-UCB achieves a regret upper bound of $\tilde{O}((N+M^3)T^{2/3})$, where $M$ is the size of the Markov chain state space and $T$ is the time horizon. Compared to existing algorithms, our result eliminates the exponential factor (in $M$, $N$) in the regret upper bound, thanks to a novel exploitation of the sparsity of transitions in general restless bandit problems. As a result, our analysis technique can also be adopted to tighten the regret bounds of existing algorithms. Finally, we conduct experiments on a real-world dataset to compare the Restless-UCB policy with state-of-the-art benchmarks. Our results show that Restless-UCB outperforms existing algorithms in regret and significantly reduces the running time.
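The abstract describes Restless-UCB only at a high level. The sketch below illustrates the general explore-then-commit structure it refers to, under stated assumptions: every name here (RestlessArm, explore_then_commit, the simulator, and the greedy commit rule standing in for the paper's offline-instance construction) is illustrative and not taken from the paper.

```python
import numpy as np

# Illustrative simulator for one restless arm: its hidden state evolves by a
# Markov chain every round, whether or not the arm is pulled.
class RestlessArm:
    def __init__(self, P, r, rng):
        self.P = np.asarray(P)      # M x M transition matrix
        self.r = np.asarray(r)      # reward for pulling the arm in each state
        self.state = 0
        self.rng = rng

    def step(self):
        self.state = self.rng.choice(len(self.r), p=self.P[self.state])

    def pull(self):
        # Only the pulled arm's state (and reward) is observed.
        return self.r[self.state], self.state


def explore_then_commit(arms, horizon, explore_rounds):
    """Explore each arm for a fixed budget, estimate its chain and rewards,
    then commit to a policy computed on the estimated (offline) instance.
    The commit rule below is a simple greedy placeholder, not the paper's
    offline-instance construction."""
    N, M = len(arms), len(arms[0].r)
    trans = np.zeros((N, M, M))      # observed transitions of the pulled arm
    reward_sum = np.zeros((N, M))
    visits = np.zeros((N, M))
    total_reward, t = 0.0, 0

    # --- Explore phase: pull each arm explore_rounds times in turn. ---
    while t < min(horizon, N * explore_rounds):
        a = (t // explore_rounds) % N
        reward, s = arms[a].pull()
        reward_sum[a, s] += reward
        visits[a, s] += 1
        for arm in arms:             # all arms are restless: every state moves
            arm.step()
        trans[a, s, arms[a].state] += 1
        total_reward += reward
        t += 1

    # Estimated offline instance: per-arm transition matrices and mean rewards.
    P_hat = trans / np.maximum(trans.sum(axis=2, keepdims=True), 1)
    r_hat = reward_sum / np.maximum(visits, 1)

    # --- Commit phase: follow a policy computed from (P_hat, r_hat). ---
    # Placeholder: always pull the arm with the best estimated average reward;
    # a faithful implementation would plan on the estimated Markov chains.
    best = int(np.argmax(r_hat.mean(axis=1)))
    while t < horizon:
        reward, _ = arms[best].pull()
        for arm in arms:
            arm.step()
        total_reward += reward
        t += 1
    return total_reward


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    arms = [RestlessArm([[0.9, 0.1], [0.2, 0.8]], [0.0, 1.0], rng),
            RestlessArm([[0.5, 0.5], [0.5, 0.5]], [0.2, 0.6], rng)]
    print(explore_then_commit(arms, horizon=10_000, explore_rounds=200))
```

The split between an exploration budget and a commit phase mirrors the explore-then-commit framework named in the abstract; the paper's contribution lies in how the offline instance is built in $O(N)$ time and analyzed, which the greedy placeholder above does not attempt to reproduce.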
