A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP

13 July 2022
Fan Chen, Junyu Zhang, Zaiwen Wen
OffRL
arXiv · PDF · HTML
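For background on the technique named in the title (standard CMDP material, not drawn from the paper itself): a constrained MDP asks for a policy that maximizes the reward return subject to upper bounds on one or more cost returns, and primal-dual methods work on the Lagrangian saddle-point form of that program. A minimal sketch in generic notation, assuming reward value V_r, cost values V_{c_i}, and thresholds b_i (symbols chosen here for illustration):

% Constrained program: maximize reward return subject to cost bounds.
\max_{\pi} \; V_r(\pi)
  \quad \text{s.t.} \quad V_{c_i}(\pi) \le b_i, \quad i = 1, \dots, m

% Dualize the constraints with multipliers \lambda \ge 0 to get a
% max-min saddle-point problem over the policy and the multipliers.
\max_{\pi} \, \min_{\lambda \ge 0} \;
  L(\pi, \lambda) = V_r(\pi) - \sum_{i=1}^{m} \lambda_i \bigl( V_{c_i}(\pi) - b_i \bigr)

A primal-dual algorithm alternates ascent steps in the policy with descent steps in the multipliers; in the off-policy setting of the title, the value estimates come from data collected under a different behavior policy rather than from on-policy rollouts.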

Papers citing "A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP"

4 papers shown:

  1. Temporal Logic Specification-Conditioned Decision Transformer for Offline Safe Reinforcement Learning
     Zijian Guo, Weichao Zhou, Wenchao Li · OffRL · 28 Jan 2025

  2. SaVeR: Optimal Data Collection Strategy for Safe Policy Evaluation in Tabular MDP
     Subhojyoti Mukherjee, Josiah P. Hanna, Robert Nowak · OffRL · 04 Jun 2024

  3. A Primal-Dual-Critic Algorithm for Offline Constrained Reinforcement Learning
     Kihyuk Hong, Yuhang Li, Ambuj Tewari · OffRL · 13 Jun 2023

  4. Achieving Zero Constraint Violation for Constrained Reinforcement Learning via Primal-Dual Approach
     Qinbo Bai, Amrit Singh Bedi, Mridul Agarwal, Alec Koppel, Vaneet Aggarwal · 13 Sep 2021