ResearchTrend.AI
  3. 2202.00150
Learning Infinite-Horizon Average-Reward Markov Decision Processes with Constraints

31 January 2022
Liyu Chen
R. Jain
Haipeng Luo
Abstract

We study regret minimization for infinite-horizon average-reward Markov Decision Processes (MDPs) under cost constraints. We start by designing a policy optimization algorithm with a carefully designed action-value estimator and bonus term, and show that for ergodic MDPs, our algorithm ensures $\widetilde{O}(\sqrt{T})$ regret and constant constraint violation, where $T$ is the total number of time steps. This strictly improves over the algorithm of (Singh et al., 2020), whose regret and constraint violation are both $\widetilde{O}(T^{2/3})$. Next, we consider the most general class of weakly communicating MDPs. Through a finite-horizon approximation, we develop another algorithm with $\widetilde{O}(T^{2/3})$ regret and constraint violation, which can be further improved to $\widetilde{O}(\sqrt{T})$ via a simple modification, albeit making the algorithm computationally inefficient. As far as we know, these are the first provable algorithms for weakly communicating MDPs with cost constraints.
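To get a feel for the improvement from $\widetilde{O}(T^{2/3})$ to $\widetilde{O}(\sqrt{T})$, the following sketch (not from the paper; it ignores constants and logarithmic factors) compares the two rates numerically. The ratio between them grows as $T^{1/6}$, so the gap widens as the horizon grows:

```python
import math

def rate_sqrt(T):
    # sqrt(T) regret rate, constants and log factors dropped
    return math.sqrt(T)

def rate_two_thirds(T):
    # T^(2/3) regret rate of Singh et al. (2020), same caveats
    return T ** (2 / 3)

for T in (10**4, 10**6, 10**8):
    ratio = rate_two_thirds(T) / rate_sqrt(T)  # equals T^(1/6)
    print(f"T={T:>9}: sqrt(T)={rate_sqrt(T):,.0f}  "
          f"T^(2/3)={rate_two_thirds(T):,.0f}  ratio={ratio:.1f}")
```

For $T = 10^6$ steps, for example, the $T^{2/3}$ bound is already ten times larger than the $\sqrt{T}$ bound, which is why the quoted improvement is meaningful in long-horizon settings.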
