On the Complexity of Value Iteration

Value iteration is a fundamental algorithm for solving Markov Decision Processes (MDPs). It computes the maximal n-step payoff by iterating n times a recurrence equation which is naturally associated to the MDP. At the same time, value iteration provides a policy for the MDP that is optimal on a given finite horizon n. In this paper, we settle the computational complexity of value iteration. We show that, given a horizon n in binary and an MDP, computing an optimal policy is EXP-complete, thus resolving an open problem that goes back to the seminal 1987 paper on the complexity of MDPs by Papadimitriou and Tsitsiklis. As a stepping stone, we show that it is EXP-complete to compute the n-fold iteration (with n in binary) of a function given by a straight-line program over the integers with max and + as operators.
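The recurrence mentioned in the abstract can be sketched as follows. This is a minimal illustration of finite-horizon value iteration, not code from the paper; the toy two-state MDP (its states, actions, transitions, and rewards) is a hypothetical example chosen only to make the sketch runnable.

```python
def value_iteration(n, states, actions, P, R):
    """Compute the maximal n-step payoff and a step-indexed optimal policy.

    P[s][a] is a list of (probability, next_state) pairs; R[s][a] is the
    immediate reward for taking action a in state s.
    """
    V = {s: 0.0 for s in states}  # 0-step payoff is 0 in every state
    policy = []                   # policy[k][s]: best action when k+1 steps remain
    for _ in range(n):
        newV, step_policy = {}, {}
        for s in states:
            # Bellman recurrence: maximize reward plus expected future payoff
            best_a, best_v = None, float("-inf")
            for a in actions[s]:
                v = R[s][a] + sum(p * V[t] for p, t in P[s][a])
                if v > best_v:
                    best_a, best_v = a, v
            newV[s], step_policy[s] = best_v, best_a
        V = newV
        policy.append(step_policy)
    return V, policy

# Hypothetical toy MDP: from s0 one can "stay" (reward 1) or "go" to s1
# (reward 0); s1 absorbs with reward 2 per step.
states = ["s0", "s1"]
actions = {"s0": ["stay", "go"], "s1": ["stay"]}
P = {"s0": {"stay": [(1.0, "s0")], "go": [(1.0, "s1")]},
     "s1": {"stay": [(1.0, "s1")]}}
R = {"s0": {"stay": 1.0, "go": 0.0}, "s1": {"stay": 2.0}}

V, policy = value_iteration(3, states, actions, P, R)
```

Note that this naive loop runs n iterations, so with the horizon n given in binary it takes time exponential in the input size; the paper's result shows that, in the worst case, this exponential cost is unavoidable.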