Finite-time analysis of single-timescale actor-critic

Abstract

Actor-critic methods have achieved significant success in many challenging applications. However, their finite-time convergence is still poorly understood in the most practical single-timescale form. Existing analyses of single-timescale actor-critic have been limited to i.i.d. sampling or the tabular setting for simplicity. We investigate the more practical online single-timescale actor-critic algorithm on continuous state spaces, where the critic uses linear function approximation and updates with a single Markovian sample per actor step. Previous analyses have been unable to establish convergence in this challenging scenario. We show that the online single-timescale actor-critic method provably finds an $\epsilon$-approximate stationary point with $\widetilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity under standard assumptions, which can be further improved to $\mathcal{O}(\epsilon^{-2})$ under i.i.d. sampling. Our novel framework systematically evaluates and controls the error propagation between the actor and critic, and it offers a promising approach for analyzing other single-timescale reinforcement learning algorithms as well.
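To make the setting concrete, below is a minimal sketch of one online single-timescale actor-critic step with a linear critic $V(s) \approx w^\top \phi(s)$. The interface names (`env.step`, `phi`, `grad_log_pi`) and the step sizes are illustrative assumptions for exposition, not the paper's notation or implementation.

```python
import numpy as np

def actor_critic_step(w, theta, s, a, env, phi, grad_log_pi,
                      alpha=0.01, beta=0.01, gamma=0.99):
    """One online update from a single Markovian sample (s, a, r, s').

    w:     linear critic weights, V(s) ~ w . phi(s)
    theta: actor (policy) parameters
    All helper interfaces here are hypothetical placeholders.
    """
    s_next, r = env.step(s, a)  # assumed to return (next state, reward)

    # Critic: semi-gradient TD(0) update of the linear value estimate.
    td_error = r + gamma * np.dot(w, phi(s_next)) - np.dot(w, phi(s))
    w = w + beta * td_error * phi(s)

    # Actor: policy-gradient step driven by the same TD error.
    theta = theta + alpha * td_error * grad_log_pi(theta, s, a)

    # Single timescale: alpha and beta decay at the same rate, so the
    # analysis cannot assume the critic converges before the actor moves.
    return w, theta, s_next
```

The key point the sketch illustrates is that both updates consume the same single transition and use comparably sized step sizes, which is what makes the coupled error propagation between actor and critic the central analytical difficulty.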