Deflated Dynamics Value Iteration

Abstract

The Value Iteration (VI) algorithm is an iterative procedure for computing the value function of a Markov decision process, and it is the basis of many reinforcement learning (RL) algorithms. Since the error convergence rate of VI as a function of the iteration count $k$ is $O(\gamma^k)$, VI is slow when the discount factor $\gamma$ is close to $1$. To accelerate the computation of the value function, we propose Deflated Dynamics Value Iteration (DDVI). DDVI uses matrix splitting and matrix deflation techniques to effectively remove (deflate) the top $s$ dominant eigen-structure of the transition matrix $\mathcal{P}^{\pi}$. We prove that this leads to a $\tilde{O}(\gamma^k |\lambda_{s+1}|^k)$ convergence rate, where $\lambda_{s+1}$ is the $(s+1)$-th largest eigenvalue of the dynamics matrix. We then extend DDVI to the RL setting and present the Deflated Dynamics Temporal Difference (DDTD) algorithm. We empirically demonstrate the effectiveness of the proposed algorithms.
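To make the idea concrete, below is a minimal NumPy sketch of the rank-one case ($s = 1$) for a tabular MDP. The function name `ddvi_rank_one` and the specific Hotelling-style deflation $E = \mathbf{1}\mu^{\top}$ (built from the stationary distribution $\mu$, the dominant left eigenvector of $\mathcal{P}^{\pi}$) are illustrative assumptions on our part, not necessarily the paper's exact construction.

```python
import numpy as np

def ddvi_rank_one(P, r, gamma, num_iters=200):
    """Illustrative sketch of DDVI with s = 1 (assumed construction).

    Deflates the dominant eigenpair of the row-stochastic matrix P
    (eigenvalue 1, right eigenvector of ones, left eigenvector = the
    stationary distribution mu) and runs the split iteration
        (I - gamma*E) V_{k+1} = r + gamma*(P - E) V_k.
    """
    n = P.shape[0]
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    eigvals, left_vecs = np.linalg.eig(P.T)
    mu = np.real(left_vecs[:, np.argmax(np.real(eigvals))])
    mu /= mu.sum()  # normalize (also fixes the eigenvector's sign)
    # Rank-one deflation matrix E = 1 mu^T; P - E has spectral radius |lambda_2|.
    E = np.outer(np.ones(n), mu)
    M = np.linalg.inv(np.eye(n) - gamma * E)  # resolvent of the deflated part
    V = np.zeros(n)
    for _ in range(num_iters):
        # Matrix-splitting step: V <- (I - gamma*E)^{-1} (r + gamma*(P - E) V)
        V = M @ (r + gamma * (P - E) @ V)
    return V

# Usage on a random 50-state chain (illustrative).
rng = np.random.default_rng(0)
P = rng.random((50, 50))
P /= P.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
r = rng.random(50)
V = ddvi_rank_one(P, r, gamma=0.99)
```

The fixed point of the split iteration is still the solution of $V^* = r + \gamma \mathcal{P}^{\pi} V^*$, but the effective iteration matrix becomes $\gamma M(\mathcal{P}^{\pi} - E)$, whose spectral radius in this rank-one construction is $\gamma|\lambda_2|$, consistent with the $\tilde{O}(\gamma^k |\lambda_{s+1}|^k)$ rate above for $s = 1$.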
