Counterfactual Explanations for Continuous Action Reinforcement Learning

Reinforcement Learning (RL) has shown great promise in domains like healthcare and robotics but often struggles with adoption due to its lack of interpretability. Counterfactual explanations, which address "what if" scenarios, provide a promising avenue for understanding RL decisions but remain underexplored for continuous action spaces. We propose a novel approach for generating counterfactual explanations in continuous action RL by computing alternative action sequences that improve outcomes while minimizing deviations from the original sequence. Our approach leverages a distance metric for continuous actions and accounts for constraints such as adhering to predefined policies in specific states. Evaluations in two RL domains, Diabetes Control and Lunar Lander, demonstrate the effectiveness, efficiency, and generalization of our approach, enabling more interpretable and trustworthy RL applications.
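The abstract does not state the exact formulation, but one plausible reading of "improve outcomes while minimizing deviations from the original sequence" is a constrained optimization over the counterfactual action sequence. The notation below is assumed for illustration: a_{1:T} is the original action sequence, a'_{1:T} the counterfactual one, d the continuous-action distance metric, J the outcome (return), delta an improvement margin, and pi_0 the predefined policy that must be followed in a constrained state set S_c.

    % Hypothetical sketch of the counterfactual objective (notation assumed, not from the paper)
    \begin{aligned}
    \min_{a'_{1:T}} \quad & \sum_{t=1}^{T} d(a_t, a'_t) \\
    \text{s.t.} \quad & J(a'_{1:T}) \ge J(a_{1:T}) + \delta, \\
    & a'_t = \pi_0(s_t) \quad \forall t \text{ with } s_t \in \mathcal{S}_c
    \end{aligned}

Under this reading, the counterfactual sequence must improve the outcome by at least a margin delta while staying as close as possible to the original actions and respecting the predefined-policy constraints in designated states.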
@article{dong2025_2505.12701,
  title   = {Counterfactual Explanations for Continuous Action Reinforcement Learning},
  author  = {Shuyang Dong and Shangtong Zhang and Lu Feng},
  journal = {arXiv preprint arXiv:2505.12701},
  year    = {2025}
}