Learning meaningful abstract models of Markov Decision Processes (MDPs) is crucial for improving generalization from limited data. In this work, we show how geometric priors can be imposed on the low-dimensional representation manifold of a learned transition model. We incorporate known symmetric structures via appropriate choices of the latent space and the associated group actions, which encode prior knowledge about invariances in the environment. In addition, our framework allows the embedding of additional unstructured information alongside these symmetries. We show experimentally that this leads to better predictions of the latent transition model than fully unstructured approaches, as well as better learning on downstream RL tasks, in environments with rotational and translational features, including in first-person views of 3D environments. Additionally, our experiments show that this leads to simpler and more disentangled representations. The full code is available on GitHub to ensure reproducibility.
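The core idea of tying actions to group actions on a structured latent space can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's implementation: it assumes a 2-D latent constrained to the unit circle, with discrete turning actions realized as fixed SO(2) rotations, so that the transition in latent space is equivariant with respect to the rotational symmetry.

```python
import numpy as np

# Hypothetical illustration (not the paper's released code): a 2-D latent on
# the unit circle, where each discrete action is tied to an SO(2) group
# element. The transition model then acts on the latent by matrix
# multiplication instead of learning an unstructured mapping.

def rotation(theta: float) -> np.ndarray:
    """SO(2) group element as a 2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def encode(angle: float) -> np.ndarray:
    """Toy encoder: map an agent's heading to a point on the unit circle."""
    return np.array([np.cos(angle), np.sin(angle)])

# Structured latent transition: each action corresponds to a fixed rotation.
actions = {
    "turn_left": rotation(np.pi / 2),
    "turn_right": rotation(-np.pi / 2),
}

z = encode(0.0)                      # latent for heading 0
z_next = actions["turn_left"] @ z    # latent after turning left

# The group structure makes the transition consistent with the symmetry:
# rotating the encoded state equals encoding the rotated state.
assert np.allclose(z_next, encode(np.pi / 2))
```

In a learned version of this setup, the encoder would be a neural network and the per-action rotation angles (or other group parameters) would be learned, while the group structure itself is fixed as a prior; unstructured latent dimensions can be concatenated alongside the group-structured ones.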
@article{delliaux2025_2506.01529,
  title   = {Learning Abstract World Models with a Group-Structured Latent Space},
  author  = {Thomas Delliaux and Nguyen-Khanh Vu and Vincent François-Lavet and Elise van der Pol and Emmanuel Rachelson},
  journal = {arXiv preprint arXiv:2506.01529},
  year    = {2025}
}