A Convergence Result for Regularized Actor-Critic Methods

Abstract
In this paper, we present a proof of convergence with probability one, under suitable conditions, for a class of actor-critic algorithms that find approximate solutions to entropy-regularized MDPs, using the machinery of stochastic approximation. To obtain this overall result, we prove convergence of policy evaluation with general regularizers under linear approximation architectures and show convergence of entropy-regularized policy improvement.
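
For context, a minimal sketch of the setting the abstract refers to (the notation below, including the temperature $\tau$ and features $\phi$, is illustrative and not taken from the paper): in an entropy-regularized MDP the policy is rewarded for its own entropy in addition to the environment reward, so the value function satisfies a "soft" Bellman equation,

$$
V^{\pi}_{\tau}(s) \;=\; \mathbb{E}_{a \sim \pi(\cdot \mid s)}\!\Big[\, r(s,a) \;-\; \tau \log \pi(a \mid s) \;+\; \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\big[\, V^{\pi}_{\tau}(s') \,\big] \Big].
$$

An actor-critic scheme of the kind described would alternate a critic (policy evaluation) step, here with a linear architecture $V_w(s) = w^{\top}\phi(s)$, with an actor step performing regularized policy improvement; the stochastic-approximation analysis treats these coupled iterates.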