
Reinforcement Learning with ω-Regular Objectives and Constraints

Main: 5 pages
3 figures
Bibliography: 3 pages
Appendix: 6 pages
Abstract

Reinforcement learning (RL) commonly relies on scalar rewards, which have limited ability to express temporal, conditional, or safety-critical goals and can lead to reward hacking. Temporal logic, expressible via the more general class of ω-regular objectives, addresses this by precisely specifying rich behavioural properties. Still, measuring performance by a single scalar (be it reward or satisfaction probability) masks safety-performance trade-offs that arise in settings with a tolerable level of risk. We address both limitations simultaneously by combining ω-regular objectives with explicit constraints, allowing safety requirements and optimisation targets to be treated separately. We develop a model-based RL algorithm based on linear programming, which in the limit produces a policy maximising the probability of satisfying an ω-regular objective while also adhering to ω-regular constraints within specified thresholds. Furthermore, we establish a translation to constrained limit-average problems with optimality-preserving guarantees.
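To make the linear-programming connection concrete, the sketch below solves a constrained limit-average (average-reward) MDP via the standard occupancy-measure LP. This is a minimal illustration under simplifying assumptions (a known tabular, unichain MDP with hypothetical reward `r` and constraint cost `c` arrays), not the paper's actual algorithm, which works with ω-regular objectives translated into such constrained limit-average problems.

```python
# Minimal illustrative sketch: occupancy-measure LP for a constrained
# limit-average MDP, solved with scipy.optimize.linprog. Assumes a known,
# unichain tabular MDP; r and c are hypothetical stand-ins for quantities
# that, in the paper's setting, would arise from omega-regular specifications.
import numpy as np
from scipy.optimize import linprog


def solve_constrained_average_mdp(P, r, c, threshold):
    """Maximise limit-average reward subject to limit-average cost <= threshold.

    P: transition probabilities, shape (S, A, S)
    r: rewards, shape (S, A)
    c: constraint costs, shape (S, A)
    """
    S, A, _ = P.shape
    n = S * A  # one occupancy variable x[s, a] per state-action pair

    # Objective: linprog minimises, so negate the reward vector.
    obj = -r.reshape(n)

    # Flow conservation for every state s':
    #   sum_a x[s', a] - sum_{s, a} x[s, a] P(s' | s, a) = 0
    A_eq = np.zeros((S + 1, n))
    for sp in range(S):
        for s in range(S):
            for a in range(A):
                idx = s * A + a
                A_eq[sp, idx] -= P[s, a, sp]
                if s == sp:
                    A_eq[sp, idx] += 1.0
    # Normalisation: occupancies form a probability distribution.
    A_eq[S, :] = 1.0
    b_eq = np.zeros(S + 1)
    b_eq[S] = 1.0

    # Constraint: expected limit-average cost stays within the threshold.
    A_ub = c.reshape(1, n)
    b_ub = np.array([threshold])

    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n)
    x = res.x.reshape(S, A)
    # Recover a stationary policy by normalising occupancies per state.
    policy = x / np.maximum(x.sum(axis=1, keepdims=True), 1e-12)
    return policy, -res.fun
```

The returned policy is stationary and may be randomised, which is generally necessary for constrained problems; the second return value is the optimal limit-average reward under the cost constraint.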
