A Temporal Difference Method for Stochastic Continuous Dynamics

For continuous systems modeled by dynamical equations such as ODEs and SDEs, Bellman's principle of optimality takes the form of the Hamilton-Jacobi-Bellman (HJB) equation, which provides the theoretical target of reinforcement learning (RL). Although recent advances in RL successfully leverage this formulation, existing methods typically assume the underlying dynamics are known a priori, because updating the value function via the HJB equation requires explicit access to the coefficient functions of the dynamical equations. We address this inherent limitation of HJB-based RL by proposing a model-free approach that still targets the HJB equation, together with a corresponding temporal difference method. We demonstrate its potential advantages over transition kernel-based formulations, both qualitatively and empirically. The proposed formulation paves the way toward bridging stochastic optimal control and model-free reinforcement learning.
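For context only, the following is the standard form of the HJB equation for a controlled diffusion, which the abstract refers to as the theoretical target; the notation (dynamics $f$, diffusion $\sigma$, reward $r$, discount rate $\rho$) is assumed here for illustration and is not taken from the paper.

$$
\mathrm{d}X_t = f(X_t, u_t)\,\mathrm{d}t + \sigma(X_t, u_t)\,\mathrm{d}W_t,
\qquad
\rho\, V(x) = \max_{u}\Big[\, r(x,u) + f(x,u)^{\top}\nabla V(x) + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma(x,u)\sigma(x,u)^{\top}\nabla^{2}V(x)\big) \Big].
$$

Methods that update $V$ directly through this equation need the coefficients $f$ and $\sigma$, which is the model-dependence the paper's model-free temporal difference formulation is designed to avoid.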