Classical Policy Gradient: Preserving Bellman's Principle of Optimality

Abstract
We propose a new objective function for finite-horizon episodic Markov decision processes that better captures Bellman's principle of optimality, and provide an expression for its gradient.