Efficient Reinforcement Learning in Probabilistic Reward Machines

19 August 2024
Xiaofeng Lin
Xuezhou Zhang
Abstract

In this paper, we study reinforcement learning in Markov Decision Processes with Probabilistic Reward Machines (PRMs), a form of non-Markovian reward commonly found in robotics tasks. We design an algorithm for PRMs that achieves a regret bound of $\widetilde{O}(\sqrt{HOAT} + H^2O^2A^{3/2} + H\sqrt{T})$, where $H$ is the time horizon, $O$ is the number of observations, $A$ is the number of actions, and $T$ is the number of time-steps. This result improves over the best-known bound, $\widetilde{O}(H\sqrt{OAT})$, of \citet{pmlr-v206-bourel23a} for MDPs with Deterministic Reward Machines (DRMs), a special case of PRMs. When $T \geq H^3O^3A^2$ and $OA \geq H$, our regret bound leads to a regret of $\widetilde{O}(\sqrt{HOAT})$, which matches the established lower bound of $\Omega(\sqrt{HOAT})$ for MDPs with DRMs up to a logarithmic factor. To the best of our knowledge, this is the first efficient algorithm for PRMs. Additionally, we present a new simulation lemma for non-Markovian rewards, which enables reward-free exploration for any non-Markovian reward given access to an approximate planner. Complementing our theoretical findings, we show through extensive experimental evaluations that our algorithm indeed outperforms prior methods in various PRM environments.
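Since the abstract centers on PRMs as a non-Markovian reward structure, a minimal sketch may help make the object concrete. The Python snippet below is not the paper's implementation or algorithm; the class name `PRM`, its interface (`reset`, `step`), and the exact formalization (machine-state transitions sampled from a distribution conditioned on the observation, with rewards attached to the realized transition) are illustrative assumptions only.

```python
# Illustrative sketch of a Probabilistic Reward Machine (PRM).
# NOT the paper's implementation; the formalization below is an assumption
# chosen for clarity: given an environment observation, the machine's next
# state is sampled from a distribution, and the emitted reward depends on
# the realized machine transition, so the reward is non-Markovian with
# respect to the environment observation alone.

import random
from typing import Dict, Tuple


class PRM:
    def __init__(
        self,
        initial_state: int,
        # transition[(q, obs)] = {q_next: probability of moving to q_next}
        transition: Dict[Tuple[int, str], Dict[int, float]],
        # reward[(q, obs, q_next)] = reward for that realized transition
        reward: Dict[Tuple[int, str, int], float],
    ):
        self.initial_state = initial_state
        self.transition = transition
        self.reward = reward
        self.state = initial_state

    def reset(self) -> int:
        """Return the machine to its initial state (start of an episode)."""
        self.state = self.initial_state
        return self.state

    def step(self, obs: str) -> Tuple[int, float]:
        """Advance the machine on one observation; return (next state, reward)."""
        dist = self.transition[(self.state, obs)]
        q_next = random.choices(list(dist.keys()), weights=list(dist.values()), k=1)[0]
        r = self.reward.get((self.state, obs, q_next), 0.0)
        self.state = q_next
        return q_next, r


if __name__ == "__main__":
    # Toy two-state PRM: from state 0, observing "goal" moves to state 1
    # with probability 0.9 (reward 1.0) or stays in state 0 otherwise.
    prm = PRM(
        initial_state=0,
        transition={
            (0, "goal"): {1: 0.9, 0: 0.1},
            (0, "empty"): {0: 1.0},
            (1, "goal"): {1: 1.0},
            (1, "empty"): {1: 1.0},
        },
        reward={(0, "goal", 1): 1.0},
    )
    prm.reset()
    print(prm.step("empty"))  # stays in state 0, reward 0.0
    print(prm.step("goal"))   # moves to state 1 with reward 1.0, w.p. 0.9
```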
