
Experiential Explanations for Reinforcement Learning

Abstract

Reinforcement learning (RL) systems can be complex and difficult to interpret, making it challenging for non-AI experts to understand or intervene in their decisions. This is due in part to the sequential nature of RL, in which actions are chosen for their likelihood of obtaining future rewards. However, RL agents discard the qualitative features of their training, making it difficult to recover user-understandable information about "why" an action was chosen. We propose a technique, Experiential Explanations, that generates counterfactual explanations by training influence predictors alongside the RL policy. Influence predictors are models that learn how different sources of reward affect the agent in different states, thus restoring information about how the policy reflects the environment. Two human evaluation studies revealed that participants presented with Experiential Explanations were better able to correctly guess what an agent would do than those presented with other standard types of explanation. Participants also found Experiential Explanations more understandable, satisfying, complete, useful, and accurate. Qualitative analysis identifies which factors of Experiential Explanations are most useful and the characteristics participants seek from explanations.
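
To make the idea concrete, below is a minimal sketch of one way an influence predictor could be trained alongside a policy. It assumes a toy 5x5 gridworld with a goal and a single negative reward source ("lava"), tabular Q-learning, and a TD-learned per-source influence table. The environment, the names step and influence, and the tabular formulation are illustrative assumptions, not the paper's implementation.

import numpy as np

# Hypothetical 5x5 gridworld: start at (0, 0); reaching GOAL gives +1 and
# stepping into LAVA gives -1. The two reward sources are tracked separately.
SIZE = 5
GOAL, LAVA = (4, 4), (2, 2)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, a):
    """Apply action a, clamp to the grid, return (next_state, per-source rewards, done)."""
    dr, dc = ACTIONS[a]
    nxt = (min(max(state[0] + dr, 0), SIZE - 1), min(max(state[1] + dc, 0), SIZE - 1))
    if nxt == GOAL:
        return nxt, {"goal": 1.0, "lava": 0.0}, True
    if nxt == LAVA:
        return nxt, {"goal": 0.0, "lava": -1.0}, True
    return nxt, {"goal": 0.0, "lava": 0.0}, False

gamma, alpha, eps = 0.95, 0.1, 0.2
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))      # action values for the control policy
influence = {"lava": np.zeros((SIZE, SIZE))}  # per-source influence predictor(s)
rng = np.random.default_rng(0)

for episode in range(2000):
    s = (0, 0)
    for t in range(200):  # cap episode length
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt, rewards, done = step(s, a)
        # Standard Q-learning update on the combined reward.
        target = sum(rewards.values()) + (0.0 if done else gamma * np.max(Q[nxt]))
        Q[s][a] += alpha * (target - Q[s][a])
        # Influence predictors: a TD update that uses only one source's reward,
        # so influence[src][s] estimates the discounted effect that source is
        # expected to have on the agent starting from state s.
        for src, table in influence.items():
            src_target = rewards[src] + (0.0 if done else gamma * table[nxt])
            table[s] += alpha * (src_target - table[s])
        s = nxt
        if done:
            break

# A counterfactual-style contrast: compare how strongly the negative source is
# predicted to affect the agent in two different states.
print("lava influence next to the lava cell :", influence["lava"][2, 1])
print("lava influence far from the lava cell:", influence["lava"][0, 4])

An explanation generator could then contrast such influence estimates along the path the agent chose versus an alternative path a user asks about, which is the kind of counterfactual comparison the abstract describes.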

@article{alabdulkarim2025_2210.04723,
  title={Experiential Explanations for Reinforcement Learning},
  author={Amal Alabdulkarim and Madhuri Singh and Gennie Mansi and Kaely Hall and Upol Ehsan and Mark O. Riedl},
  journal={arXiv preprint arXiv:2210.04723},
  year={2025}
}