Efficient Policy Iteration for Robust Markov Decision Processes via Regularization

28 May 2022
Navdeep Kumar
Kfir Y. Levy
Kaixin Wang
Shie Mannor
Abstract

Robust Markov decision processes (MDPs) provide a general framework for modeling decision problems in which the system dynamics are changing or only partially known. Efficient methods exist for some \texttt{sa}-rectangular robust MDPs, exploiting their equivalence with reward-regularized MDPs, and these methods generalize to online settings. Compared with \texttt{sa}-rectangular robust MDPs, \texttt{s}-rectangular robust MDPs are less restrictive but much harder to handle. Interestingly, recent works have established an equivalence between \texttt{s}-rectangular robust MDPs and policy-regularized MDPs, but it is not yet clear how to exploit this equivalence to perform policy improvement steps and obtain the optimal value function or policy, and little is known about the greedy/optimal policy beyond the fact that it can be stochastic. Moreover, no existing methods generalize naturally to model-free settings. We show a clear and explicit equivalence between \texttt{s}-rectangular $L_p$ robust MDPs and policy-regularized MDPs that closely resemble the policy-entropy-regularized MDPs widely used in practice. We further analyze the policy improvement step and concretely derive optimal robust Bellman operators for \texttt{s}-rectangular $L_p$ robust MDPs. We find that the greedy/optimal policies in \texttt{s}-rectangular $L_p$ robust MDPs are threshold policies that play the top $k$ actions whose $Q$ value exceeds some threshold, with probability proportional to the $(p-1)$th power of the advantage. In addition, we show that the time complexity of (\texttt{sa}- and \texttt{s}-rectangular) $L_p$ robust MDPs is the same as that of non-robust MDPs up to logarithmic factors. Our work greatly extends the existing understanding of \texttt{s}-rectangular robust MDPs and generalizes naturally to online settings.
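
To make the threshold-policy form concrete, below is a minimal Python sketch (not the authors' implementation) of the policy shape described in the abstract: actions whose $Q$ value exceeds a threshold are played with probability proportional to the $(p-1)$th power of their advantage over that threshold. The threshold itself, which in the paper comes out of the robust Bellman optimization, is assumed to be given here.

```python
import numpy as np

def threshold_policy(q_values: np.ndarray, threshold: float, p: float) -> np.ndarray:
    """Illustrative sketch of the threshold-policy form (assumes threshold is given).

    Actions with Q value above `threshold` receive probability proportional to
    the (p-1)-th power of their advantage over the threshold; all other actions
    receive probability zero.
    """
    advantage = np.maximum(q_values - threshold, 0.0)   # zero out actions below the threshold
    weights = advantage ** (p - 1)                       # (p-1)-th power of the clipped advantage
    total = weights.sum()
    if total == 0.0:                                     # degenerate case: fall back to the greedy action
        policy = np.zeros_like(q_values)
        policy[np.argmax(q_values)] = 1.0
        return policy
    return weights / total                               # normalize into a probability distribution

# Example: with p = 2, weights are linear in the clipped advantage,
# so only the two actions above the threshold receive probability mass.
q = np.array([1.0, 0.5, 0.2, -0.3])
print(threshold_policy(q, threshold=0.4, p=2.0))
```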
