On the Use of Non-Stationary Policies for Infinite-Horizon Discounted Markov Decision Processes

Abstract

We consider infinite-horizon $\gamma$-discounted Markov Decision Processes, for which it is known that there exists a stationary optimal policy. We consider the Value Iteration algorithm and the sequence of policies $\pi_1, \ldots, \pi_k$ it implicitly generates until some iteration $k$. We provide performance bounds for non-stationary policies involving the last $m$ generated policies that reduce the state-of-the-art bound for the last stationary policy $\pi_k$ by a factor of $\frac{1-\gamma}{1-\gamma^m}$. In particular, the use of non-stationary policies makes it possible to reduce the usual asymptotic performance bound of Value Iteration with errors bounded by $\epsilon$ at each iteration from $\frac{\gamma}{(1-\gamma)^2}\epsilon$ to $\frac{\gamma}{1-\gamma}\epsilon$, which is significant in the common situation where $\gamma$ is close to $1$. Given Bellman operators that can only be computed with some error $\epsilon$, a surprising consequence of this result is that the problem of "computing an approximately optimal non-stationary policy" is much simpler than that of "computing an approximately optimal stationary policy", and even slightly simpler than that of "approximately computing the value of some fixed policy", since this last problem only has a guarantee of $\frac{1}{1-\gamma}\epsilon$.
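
To make the construction concrete, the following is a minimal sketch (in Python/NumPy) of Value Iteration on a small tabular MDP, together with an evaluation of the non-stationary policy that cycles through the last $m$ greedy policies. The transition tensor `P[s, a, s']`, reward matrix `R[s, a]`, the function names, and the cycling order (most recent policy first) are illustrative assumptions rather than the paper's own notation; the backups here are exact, whereas the paper's bounds concern the approximate setting where each backup is computed with error at most $\epsilon$.

```python
import numpy as np

def value_iteration(P, R, gamma, k):
    """Run k iterations of (exact) Value Iteration on a tabular MDP.

    P: transition tensor of shape (S, A, S); R: reward matrix of shape (S, A).
    Returns the greedy policies pi_1, ..., pi_k generated along the way.
    """
    S, A, _ = P.shape
    v = np.zeros(S)
    policies = []
    for _ in range(k):
        q = R + gamma * (P @ v)            # Q-values, shape (S, A)
        policies.append(q.argmax(axis=1))  # greedy policy w.r.t. current v
        v = q.max(axis=1)                  # Bellman optimality backup
    return policies

def value_of_cyclic_policy(P, R, gamma, cycle, n_sweeps=500):
    """Approximate value of the periodic non-stationary policy that plays
    cycle[0], cycle[1], ..., cycle[-1] and then repeats.

    Its value is the fixed point of the composed Bellman operators
    T_{cycle[0]} o T_{cycle[1]} o ... o T_{cycle[-1]}; we iterate that
    composition until (approximate) convergence.
    """
    S = P.shape[0]
    idx = np.arange(S)
    v = np.zeros(S)
    for _ in range(n_sweeps):
        for pi in reversed(cycle):         # innermost operator first
            v = R[idx, pi] + gamma * (P[idx, pi] @ v)
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A, gamma, k, m = 20, 3, 0.95, 60, 5
    P = rng.random((S, A, S))
    P /= P.sum(axis=2, keepdims=True)      # normalize to a stochastic tensor
    R = rng.random((S, A))

    pis = value_iteration(P, R, gamma, k)
    last_m = pis[::-1][:m]                 # pi_k, pi_{k-1}, ..., pi_{k-m+1}
    v_nonstationary = value_of_cyclic_policy(P, R, gamma, last_m)
    v_stationary = value_of_cyclic_policy(P, R, gamma, [pis[-1]])
    print(v_nonstationary.mean(), v_stationary.mean())
```

In this error-free setting both policies are near-optimal for large $k$; the paper's point is that, when each iteration incurs an error $\epsilon$, the non-stationary policy enjoys the tighter asymptotic guarantee stated above.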
