
Enforcing Almost-Sure Reachability in POMDPs

Abstract

Partially Observable Markov Decision Processes (POMDPs) are a well-known formal model for planning scenarios in which agents operate under limited information about their environment. In safety-critical domains, the agent must adhere to a policy that satisfies certain behavioral constraints. We study the problem of computing policies that almost surely reach a goal state while never visiting a set of bad states. In particular, we present an iterative symbolic approach that computes a so-called winning region, that is, a set of system configurations such that every policy that stays within this set is guaranteed to satisfy the constraints. The empirical evaluation demonstrates the scalability and efficacy of our approach. In addition, we show its applicability to safe exploration of POMDPs by restricting agent behavior to these winning regions.
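The abstract does not spell out the symbolic fixed-point construction, but the core idea of iteratively shrinking a candidate winning region can be illustrated on a simplified, fully observable abstraction such as a belief-support MDP. The Python sketch below is a hypothetical illustration, not the authors' algorithm: it assumes an explicitly enumerated MDP whose transitions are given only by their supports (the `trans` mapping, function name, and toy example are all assumptions), and it computes the states from which some policy reaches a goal with probability 1 while never entering a bad state.

```python
def almost_sure_winning_region(states, trans, goal, bad):
    """Iteratively compute the set of states from which some policy
    almost surely reaches `goal` without ever visiting `bad`.

    trans: dict mapping (state, action) -> set of possible successors
           (only the support of each distribution matters here).
    """
    # Candidate winning region: everything except the bad states.
    candidate = set(states) - set(bad)

    while True:
        # An action is "safe" in a state if every successor stays
        # inside the current candidate region.
        safe = {
            (s, a): succs
            for (s, a), succs in trans.items()
            if s in candidate and succs <= candidate
        }

        # Backward reachability: states that can reach the goal
        # using only safe actions.
        reach = set(goal) & candidate
        changed = True
        while changed:
            changed = False
            for (s, a), succs in safe.items():
                if s not in reach and succs & reach:
                    reach.add(s)
                    changed = True

        # Remove states that cannot reach the goal via safe actions;
        # this may invalidate further actions, so repeat to a fixed point.
        if reach == candidate:
            return candidate
        candidate = reach


# Hypothetical toy example: action "b" in s0 may lead to the trap,
# so a winning policy must avoid it and play "a" everywhere.
states = {"s0", "s1", "s2", "trap"}
trans = {
    ("s0", "a"): {"s0", "s1"},
    ("s0", "b"): {"trap"},
    ("s1", "a"): {"s2"},
    ("s2", "a"): {"s2"},
    ("trap", "a"): {"trap"},
}
print(almost_sure_winning_region(states, trans, goal={"s2"}, bad={"trap"}))
# -> {'s0', 's1', 's2'}
```

Restricting the agent to states and actions that remain in the returned set mirrors, in this simplified setting, how a winning region can shield exploration: any behavior confined to the region keeps the reach-avoid guarantee intact.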
