
Accelerating Value Iteration with Anchoring

Neural Information Processing Systems (NeurIPS), 2023
Abstract

Value Iteration (VI) is foundational to the theory and practice of modern reinforcement learning, and it is known to converge at a $\mathcal{O}(\gamma^k)$-rate, where $\gamma$ is the discount factor. Surprisingly, however, the optimal rate for the VI setup was not known, and finding a general acceleration mechanism has been an open problem. In this paper, we present the first accelerated VI for both the Bellman consistency and optimality operators. Our method, called Anc-VI, is based on an \emph{anchoring} mechanism (distinct from Nesterov's acceleration), and it reduces the Bellman error faster than standard VI. In particular, Anc-VI exhibits a $\mathcal{O}(1/k)$-rate for $\gamma\approx 1$ or even $\gamma=1$, while standard VI has rate $\mathcal{O}(1)$ for $\gamma\ge 1-1/k$, where $k$ is the iteration count. We also provide a complexity lower bound matching the upper bound up to a constant factor of $4$, thereby establishing optimality of the accelerated rate of Anc-VI. Finally, we show that the anchoring mechanism provides the same benefit in the approximate VI and Gauss--Seidel VI setups as well.
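To make the anchoring idea concrete, here is a minimal sketch on a hypothetical 2-state, 2-action MDP (not from the paper). It contrasts standard VI, $V_{j+1} = T(V_j)$, with an anchored update $V_{j+1} = \beta_j V_0 + (1-\beta_j)\,T(V_j)$ that blends each iterate back toward the starting point $V_0$. The Halpern-style schedule $\beta_j = 1/(j+2)$ is an assumption for illustration; the paper's Anc-VI uses its own carefully chosen anchoring coefficients.

```python
import numpy as np

# Hypothetical toy MDP (illustrative, not from the paper).
# P[a, s, s'] = transition probability; R[a, s] = reward for action a in state s.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    [[0.5, 0.5], [0.7, 0.3]],   # transitions under action 1
])
R = np.array([
    [1.0, 0.0],                 # rewards for action 0
    [0.5, 2.0],                 # rewards for action 1
])
gamma = 0.9                     # discount factor

def bellman_opt(V):
    """Bellman optimality operator: (TV)(s) = max_a [ R[a,s] + gamma * E_{s'} V(s') ]."""
    return np.max(R + gamma * (P @ V), axis=0)

def standard_vi(V0, k):
    """Standard VI: repeatedly apply T."""
    V = V0.copy()
    for _ in range(k):
        V = bellman_opt(V)
    return V

def anchored_vi(V0, k):
    """Anchored VI sketch: V_{j+1} = beta_j * V0 + (1 - beta_j) * T(V_j),
    with beta_j = 1/(j+2) as an assumed Halpern-style anchoring schedule."""
    V = V0.copy()
    for j in range(k):
        beta = 1.0 / (j + 2)
        V = beta * V0 + (1.0 - beta) * bellman_opt(V)
    return V

if __name__ == "__main__":
    V0 = np.zeros(2)
    for name, V in [("VI", standard_vi(V0, 200)), ("Anc-VI", anchored_vi(V0, 200))]:
        # Bellman error ||T(V) - V||_inf, the quantity both methods drive to zero.
        err = np.max(np.abs(bellman_opt(V) - V))
        print(f"{name}: Bellman error = {err:.3e}")
```

Both iterations drive the sup-norm Bellman error to zero; the paper's point is that for $\gamma$ near $1$, the anchored iteration's $\mathcal{O}(1/k)$ guarantee on the Bellman error beats the $\mathcal{O}(\gamma^k)$ bound of plain VI, which degrades to $\mathcal{O}(1)$ when $\gamma \ge 1 - 1/k$.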
