ResearchTrend.AI

Model Predictive Control is Almost Optimal for Restless Bandit

8 October 2024
Nicolas Gast
Dheeraj Narasimha
Abstract

We consider the discrete-time, infinite-horizon, average-reward restless Markovian bandit (RMAB) problem. We propose a model predictive control based non-stationary policy with a rolling computational horizon τ. At each time-slot, this policy solves a τ-horizon linear program whose first control value is kept as a control for the RMAB. Our solution requires minimal assumptions and quantifies the loss in optimality in terms of τ and the number of arms, N. We show that its sub-optimality gap is O(1/√N) in general, and exp(−Ω(N)) under a local-stability condition. Our proof is based on a framework from dynamic control known as dissipativity. Our solution is easy to implement and performs very well in practice when compared to the state of the art. Further, both our solution and our proof methodology can easily be generalized to more general constrained MDP settings and should thus be of great interest to the burgeoning RMAB community.
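The rolling-horizon procedure described above can be sketched in code. The following is an illustrative fluid (mean-field) LP relaxation of an RMAB, not the paper's exact formulation: the state space, transition matrices, reward table, budget parameter `alpha`, and the choice of `scipy`'s LP solver are all assumptions made for the sketch. The LP optimizes the occupation measure y[t, s, a] over the next τ slots, and only the first-step activation fractions are returned as the control.

```python
import numpy as np
from scipy.optimize import linprog

def mpc_first_control(m0, P, r, alpha, tau):
    """Solve a tau-horizon fluid LP relaxation of an RMAB and return the
    first-step control (hypothetical sketch, not the paper's exact LP).

    m0    : (S,) empirical state distribution of the N arms
    P     : (2, S, S) transition matrices, P[a][s, s'] for action a in {0, 1}
    r     : (S, 2) rewards r[s, a]
    alpha : budget -- fraction of arms activated in each time-slot
    tau   : rolling computational horizon
    """
    S = len(m0)
    n = tau * S * 2                               # LP variables y[t, s, a]
    idx = lambda t, s, a: (t * S + s) * 2 + a

    # Objective: maximize total expected reward == minimize its negation.
    c = np.zeros(n)
    for t in range(tau):
        for s in range(S):
            for a in range(2):
                c[idx(t, s, a)] = -r[s, a]

    A_eq, b_eq = [], []
    # Initial occupation: sum_a y[0, s, a] = m0[s].
    for s in range(S):
        row = np.zeros(n)
        row[idx(0, s, 0)] = row[idx(0, s, 1)] = 1.0
        A_eq.append(row); b_eq.append(m0[s])
    # Flow conservation: sum_a y[t+1, s', a] = sum_{s,a} y[t, s, a] P[a][s, s'].
    for t in range(tau - 1):
        for sp in range(S):
            row = np.zeros(n)
            row[idx(t + 1, sp, 0)] = row[idx(t + 1, sp, 1)] = 1.0
            for s in range(S):
                for a in range(2):
                    row[idx(t, s, a)] -= P[a][s, sp]
            A_eq.append(row); b_eq.append(0.0)
    # Budget: sum_s y[t, s, 1] = alpha at every slot.
    for t in range(tau):
        row = np.zeros(n)
        for s in range(S):
            row[idx(t, s, 1)] = 1.0
        A_eq.append(row); b_eq.append(alpha)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n, method="highs")
    assert res.success, res.message
    # First control value: fraction of arms to activate in each state at t = 0.
    return np.array([res.x[idx(0, s, 1)] for s in range(S)])
```

In an MPC loop, this function would be called once per time-slot: the returned activation fractions are applied to the N arms, the empirical distribution `m0` is updated from the observed transitions, and the τ-horizon LP is re-solved from the new state.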

@article{gast2025_2410.06307,
  title={Model Predictive Control is Almost Optimal for Restless Bandit},
  author={Nicolas Gast and Dheeraj Narasimha},
  journal={arXiv preprint arXiv:2410.06307},
  year={2025}
}