Interpretable Learning Dynamics in Unsupervised Reinforcement Learning

We present an interpretability framework for unsupervised reinforcement learning (URL) agents, aimed at understanding how intrinsic motivation shapes attention, behavior, and representation learning. We analyze five agents (DQN, RND, ICM, PPO, and a Transformer-RND variant) trained on procedurally generated environments, using Grad-CAM, Layer-wise Relevance Propagation (LRP), exploration metrics, and latent space clustering. To capture how agents perceive and adapt over time, we introduce two metrics: attention diversity, which measures the spatial breadth of focus, and attention change rate, which quantifies temporal shifts in attention. Our findings show that curiosity-driven agents display broader, more dynamic attention and more exploratory behavior than their extrinsically motivated counterparts. Among them, Transformer-RND combines wide attention, high exploration coverage, and compact, structured latent representations. Our results highlight the influence of architectural inductive biases and training signals on internal agent dynamics. Beyond reward-centric evaluation, the proposed framework offers diagnostic tools to probe perception and abstraction in RL agents, enabling more interpretable and generalizable behavior.
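The abstract does not give formulas for the two metrics, so the following is a minimal sketch under stated assumptions: attention diversity is taken as the normalized spatial entropy of a saliency map (e.g., from Grad-CAM), and attention change rate as the mean absolute per-pixel difference between consecutive normalized maps. The function names attention_diversity and attention_change_rate are hypothetical, not from the paper.

```python
import numpy as np

def attention_diversity(attn_map: np.ndarray) -> float:
    """Spatial breadth of focus, approximated here (assumption) as the
    normalized Shannon entropy of the attention distribution over pixels.
    attn_map: non-negative 2-D saliency map, e.g. from Grad-CAM."""
    p = attn_map.flatten().astype(np.float64)
    p = p / (p.sum() + 1e-12)                    # normalize to a distribution
    entropy = -(p * np.log(p + 1e-12)).sum()     # Shannon entropy (nats)
    return float(entropy / np.log(p.size))       # in [0, 1]; 1 = uniform focus

def attention_change_rate(prev_map: np.ndarray, curr_map: np.ndarray) -> float:
    """Temporal shift in attention between consecutive frames, taken here
    (assumption) as the mean absolute difference of the normalized maps."""
    a = prev_map / (prev_map.sum() + 1e-12)
    b = curr_map / (curr_map.sum() + 1e-12)
    return float(np.abs(a - b).mean())

# Example: diffuse attention scores high diversity, a single-pixel
# focus scores zero, and switching between them yields a large change rate.
rng = np.random.default_rng(0)
broad = rng.random((84, 84))                     # diffuse attention map
narrow = np.zeros((84, 84)); narrow[40, 40] = 1  # single-pixel focus
print(attention_diversity(broad))                # close to 1.0
print(attention_diversity(narrow))               # 0.0
print(attention_change_rate(broad, narrow))      # large temporal shift
```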
@article{pandey2025_2505.06279,
  title   = {Interpretable Learning Dynamics in Unsupervised Reinforcement Learning},
  author  = {Shashwat Pandey},
  journal = {arXiv preprint arXiv:2505.06279},
  year    = {2025}
}