Predictable Reinforcement Learning Dynamics through Entropy Rate Minimization

In Reinforcement Learning (RL), agents have no incentive to exhibit predictable behaviors, and are often pushed (e.g., through policy entropy regularization) to randomize their actions in favor of exploration. This often makes it challenging for other agents and humans to predict an agent's behavior, which can trigger unsafe scenarios (e.g., in human-robot interaction). We propose a novel method to induce predictable behavior in RL agents, termed Predictability-Aware RL (PARL), which employs the agent's trajectory entropy rate to quantify predictability. Our method maximizes a linear combination of a standard discounted reward and the negative entropy rate, thus trading off optimality with predictability. We show how the entropy rate can be formally cast as an average reward, how entropy-rate value functions can be estimated from a learned model, and how this estimate can be incorporated into policy-gradient algorithms, and we demonstrate that this approach produces predictable (near-optimal) policies in tasks inspired by human-robot use-cases.
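
As a minimal sketch of the trade-off described in the abstract, with notation chosen here for illustration (the symbols $\beta$, $H(\pi)$, and the exact form of the entropy rate are our assumptions, not quoted from the paper): writing $P_\pi(s' \mid s) = \sum_a \pi(a \mid s) P(s' \mid s, a)$ for the state-transition kernel induced by policy $\pi$, the trajectory entropy rate and the combined objective can be written as
\[
  H(\pi) \;=\; \lim_{T \to \infty} \frac{1}{T}\, \mathbb{E}_\pi\!\left[ -\sum_{t=0}^{T-1} \log P_\pi(s_{t+1} \mid s_t) \right],
  \qquad
  \max_\pi \;\; \mathbb{E}_\pi\!\left[ \sum_{t=0}^{\infty} \gamma^t\, r(s_t, a_t) \right] \;-\; \beta\, H(\pi),
\]
where $\beta \ge 0$ trades off task reward against predictability. The per-step surprisal $-\log P_\pi(s_{t+1} \mid s_t)$ acts as a stage cost, which is what allows the entropy rate to be treated as an average-reward criterion, estimated from a learned transition model, and combined with standard policy-gradient estimators.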
@article{ornia2025_2311.18703,
  title   = {Predictable Reinforcement Learning Dynamics through Entropy Rate Minimization},
  author  = {Daniel Jarne Ornia and Giannis Delimpaltadakis and Jens Kober and Javier Alonso-Mora},
  journal = {arXiv preprint arXiv:2311.18703},
  year    = {2025}
}