On the Complexity of Value Iteration

Abstract

Value iteration is a fundamental algorithm for solving Markov Decision Processes (MDPs). It computes the maximal n-step payoff by iterating n times a recurrence equation that is naturally associated with the MDP. At the same time, value iteration provides a policy for the MDP that is optimal on a given finite horizon n. In this paper, we settle the computational complexity of value iteration. We show that, given a horizon n in binary and an MDP, computing an optimal policy is EXP-complete, thus resolving an open problem that goes back to the seminal 1987 paper on the complexity of MDPs by Papadimitriou and Tsitsiklis. As a stepping stone, we show that it is EXP-complete to compute the n-fold iteration (with n in binary) of a function given by a straight-line program over the integers with max and + as operators.
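The recurrence mentioned in the abstract can be sketched as follows. This is a minimal illustrative implementation of finite-horizon value iteration (all names are ours, not from the paper); note that it iterates n times explicitly, so its running time is linear in n rather than in the size of n's binary encoding, which is precisely the gap the paper's EXP-completeness result addresses.

```python
def value_iteration(S, A, P, R, n):
    """Finite-horizon value iteration sketch (assumed interface, not the paper's).

    S: list of states; A: list of actions;
    P[s][a]: list of (probability, next_state) pairs;
    R[s][a]: immediate reward for playing a in s;
    n: horizon (number of iterations of the recurrence).

    Returns the maximal expected n-step payoff v[s] for each state s,
    and a policy optimal for the first step of the horizon.
    """
    v = {s: 0.0 for s in S}  # 0-step payoff is zero
    policy = {}
    for _ in range(n):  # iterate the recurrence n times
        new_v = {}
        for s in S:
            best_a, best_val = None, float("-inf")
            for a in A:
                # Bellman recurrence: reward now plus expected payoff later
                val = R[s][a] + sum(p * v[t] for p, t in P[s][a])
                if val > best_val:
                    best_a, best_val = a, val
            new_v[s], policy[s] = best_val, best_a
        v = new_v
    return v, policy


# Tiny two-state example MDP (hypothetical, for illustration only).
S = ["s0", "s1"]
A = ["stay", "go"]
P = {"s0": {"stay": [(1.0, "s0")], "go": [(1.0, "s1")]},
     "s1": {"stay": [(1.0, "s1")], "go": [(1.0, "s0")]}}
R = {"s0": {"stay": 1, "go": 0},
     "s1": {"stay": 2, "go": 0}}

v, policy = value_iteration(S, A, P, R, 3)
```

On this example with horizon 3, moving from s0 to s1 first (payoff 0 + 2 + 2 = 4) beats staying (1 + 1 + 1 = 3), so the optimal first action in s0 is "go".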
