Dynamic Programming for Epistemic Uncertainty in Markov Decision Processes

Axel Benyamine
Julien Grand-Clément
Marek Petrik
Michael I. Jordan
Alain Durmus
Main: 9 pages · 4 figures · 2 tables · Bibliography: 4 pages · Appendix: 26 pages
Abstract

In this paper, we propose a general theory of ambiguity-averse MDPs, which treats the uncertain transition probabilities as random variables and evaluates a policy via a risk measure applied to its random return. This ambiguity-averse MDP framework unifies several models of MDPs with epistemic uncertainty for specific choices of risk measures. We extend the concepts of value functions and Bellman operators to our setting. Based on these objects, we establish the consequences of dynamic programming principles in this framework (existence of stationary policies, value and policy iteration algorithms), and we completely characterize law-invariant risk measures compatible with dynamic programming. Our work draws connections among several variants of MDP models and fully delineates what is possible under the dynamic programming paradigm and which risk measures require leaving it.
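To make the framework concrete, here is a minimal illustrative sketch (not the paper's actual algorithm) of an ambiguity-averse Bellman backup: the uncertain transition kernel is represented by posterior samples, and a risk measure (CVaR over models, one possible choice) is applied inside each value-iteration step. The nested, per-backup application of the risk measure is exactly the dynamic-programming-compatible structure the abstract refers to; all function names and the sampling setup are assumptions for illustration.

```python
import numpy as np

def cvar(values, alpha):
    """CVaR_alpha of a sample: mean of the worst (lowest) alpha-fraction.

    With alpha = 1 this reduces to the plain mean (risk-neutral case).
    """
    k = max(1, int(np.ceil(alpha * len(values))))
    return np.sort(values)[:k].mean()

def ambiguity_averse_vi(P_samples, R, gamma=0.9, alpha=0.2, iters=200):
    """Risk-averse value iteration over sampled transition models (a sketch).

    P_samples: array (K, S, A, S) -- K samples of the random transition kernel
    R:         array (S, A)       -- known rewards
    The risk measure is applied to the K model-wise backups at every step,
    i.e. it is nested inside the Bellman recursion.
    """
    K, S, A, _ = P_samples.shape
    V = np.zeros(S)
    for _ in range(iters):
        # Q[k, s, a]: one-step backup of V under sampled model k
        Q = R[None] + gamma * np.einsum('ksan,n->ksa', P_samples, V)
        # apply the risk measure across models, then act greedily
        Q_risk = np.array([[cvar(Q[:, s, a], alpha) for a in range(A)]
                           for s in range(S)])
        V = Q_risk.max(axis=1)
    return V, Q_risk.argmax(axis=1)
```

Since CVaR never exceeds the mean, the resulting value function is pointwise below the risk-neutral (alpha = 1) one; the paper's characterization of law-invariant risk measures delineates which such choices admit this nested dynamic-programming treatment at all.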
