LTL-Constrained Policy Optimization with Cycle Experience Replay

17 April 2024
Ameesh Shah
Cameron Voloshin
Chenxi Yang
Abhinav Verma
Swarat Chaudhuri
Sanjit A. Seshia
Abstract

Linear Temporal Logic (LTL) offers a precise means for constraining the behavior of reinforcement learning agents. However, in many settings where both satisfaction and optimality conditions are present, LTL is insufficient to capture both. Instead, LTL-constrained policy optimization, where the goal is to optimize a scalar reward under LTL constraints, is needed. This constrained optimization problem proves difficult in deep Reinforcement Learning (DRL) settings, where learned policies often ignore the LTL constraint due to the sparse nature of LTL satisfaction. To alleviate the sparsity issue, we introduce Cycle Experience Replay (CyclER), a novel reward shaping technique that exploits the underlying structure of the LTL constraint to guide a policy towards satisfaction by encouraging partial behaviors compliant with the constraint. We provide a theoretical guarantee that optimizing CyclER will achieve policies that satisfy the LTL constraint with near-optimal probability. We evaluate CyclER in three continuous control domains. Our experimental results show that optimizing CyclER in tandem with the existing scalar reward outperforms existing reward-shaping methods at finding performant LTL-satisfying policies.
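
The abstract describes CyclER only at a high level. As a rough illustration of the general idea it builds on, deriving a dense shaping signal from the automaton structure of an LTL constraint rather than relying on the sparse satisfaction signal, the minimal Python sketch below tracks a hand-coded automaton alongside the environment and adds a small bonus whenever the automaton makes progress. Everything here (the AUTOMATON table, the label function, the propositions "A" and "B", the progress_bonus value) is a hypothetical stand-in for illustration; this is not the paper's CyclER algorithm.

# Hypothetical sketch: a dense shaping reward derived from an LTL constraint's
# automaton structure. NOT the paper's CyclER method; it only illustrates why
# rewarding partial progress toward satisfaction densifies the learning signal.

# Hand-coded automaton for the LTL-style task "eventually reach A, then eventually reach B".
# Automaton states: 0 (nothing done), 1 (A reached), 2 (A then B reached = accepting).
AUTOMATON = {
    (0, "A"): 1,
    (1, "B"): 2,
}
ACCEPTING = {2}

def label(obs):
    """Hypothetical labeling function mapping an observation (x, y) to the
    atomic proposition it satisfies ("A", "B", or None)."""
    x, y = obs
    if x > 0.9:
        return "A"
    if y > 0.9:
        return "B"
    return None

def shaped_reward(q, obs, scalar_reward, progress_bonus=0.5):
    """Combine the environment's scalar reward with a bonus for automaton
    progress, so the LTL constraint is no longer a purely sparse signal."""
    q_next = AUTOMATON.get((q, label(obs)), q)
    bonus = progress_bonus if q_next != q else 0.0
    return q_next, scalar_reward + bonus

# Usage: track the automaton state alongside the environment state.
q = 0
trajectory = [((0.95, 0.1), 0.0), ((0.5, 0.5), 0.1), ((0.2, 0.95), 0.0)]
total = 0.0
for obs, r in trajectory:
    q, r_shaped = shaped_reward(q, obs, r)
    total += r_shaped
print(f"final automaton state: {q}, satisfied: {q in ACCEPTING}, return: {total:.2f}")

In this toy run the agent first triggers "A", then "B", so the automaton reaches its accepting state and the shaped return reflects both the scalar reward and the two progress bonuses. CyclER itself, per the abstract, goes further by exploiting the cyclic structure of the constraint and comes with a near-optimality guarantee.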

@article{shah2025_2404.11578,
  title={LTL-Constrained Policy Optimization with Cycle Experience Replay},
  author={Ameesh Shah and Cameron Voloshin and Chenxi Yang and Abhinav Verma and Swarat Chaudhuri and Sanjit A. Seshia},
  journal={arXiv preprint arXiv:2404.11578},
  year={2025}
}