A safe exploration approach to constrained Markov decision processes

1 December 2023
Tingting Ni
Maryam Kamgarpour
Abstract

We consider discounted infinite-horizon constrained Markov decision processes (CMDPs), where the goal is to find an optimal policy that maximizes the expected cumulative reward while satisfying expected cumulative constraints. Motivated by the application of CMDPs in online learning for safety-critical systems, we focus on developing a model-free and \emph{simulator-free} algorithm that ensures \emph{constraint satisfaction during learning}. To this end, we employ the LB-SGD algorithm proposed in \cite{usmanova2022log}, which utilizes an interior-point approach based on the log-barrier function of the CMDP. Under the commonly assumed conditions of relaxed Fisher non-degeneracy and bounded transfer error in policy parameterization, we establish the theoretical properties of the LB-SGD algorithm. In particular, unlike existing CMDP approaches that ensure policy feasibility only upon convergence, the LB-SGD algorithm guarantees feasibility throughout the learning process and converges to an $\varepsilon$-optimal policy with a sample complexity of $\tilde{\mathcal{O}}(\varepsilon^{-6})$. Compared to the state-of-the-art policy gradient-based algorithm, C-NPG-PDA \cite{bai2022achieving2}, the LB-SGD algorithm requires an additional $\mathcal{O}(\varepsilon^{-2})$ samples to ensure policy feasibility during learning with the same Fisher non-degenerate parameterization.
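The LB-SGD algorithm itself is specified in \cite{usmanova2022log}; as a rough intuition for the interior-point idea described above, the sketch below shows a generic log-barrier policy-gradient step for a CMDP with a single expected-cost constraint. It is only an illustration under stated assumptions, not the authors' implementation: the helper callables (estimate_return, estimate_cost, estimate_gradients), the barrier weight eta, and the step-size rule are hypothetical placeholders.

```python
import numpy as np

def log_barrier_step(theta, estimate_return, estimate_cost, estimate_gradients,
                     cost_limit, eta=0.1, lr=1e-2):
    """Illustrative sketch (not LB-SGD): one ascent step on the barrier surrogate
        B(theta) = V_r(theta) + eta * log(cost_limit - V_c(theta)),
    which is finite only while V_c(theta) < cost_limit, so an iterate that starts
    feasible stays feasible for a sufficiently small step."""
    v_c = estimate_cost(theta)                  # Monte Carlo estimate of expected cost
    slack = cost_limit - v_c                    # distance to the constraint boundary
    if slack <= 0:
        raise ValueError("infeasible iterate: log-barrier is undefined")
    grad_r, grad_c = estimate_gradients(theta)  # policy-gradient estimates for reward/cost
    # Barrier gradient: reward gradient plus a repulsion term that grows
    # as the iterate approaches the constraint boundary.
    grad_b = grad_r - (eta / slack) * grad_c
    # Shrink the step near the boundary to help keep the next iterate feasible.
    step = lr * min(1.0, slack)
    return theta + step * grad_b
```

In this toy form, the barrier weight eta would be decreased over the run so that the surrogate's maximizer approaches the constrained optimum, mirroring the role of the log-barrier parameter in interior-point methods.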

@article{ni2025_2312.00561,
  title={ A safe exploration approach to constrained Markov decision processes },
  author={ Tingting Ni and Maryam Kamgarpour },
  journal={arXiv preprint arXiv:2312.00561},
  year={ 2025 }
}