RLeXplore: Accelerating Research in Intrinsically-Motivated Reinforcement Learning

Extrinsic rewards can effectively guide reinforcement learning (RL) agents in specific tasks. However, extrinsic rewards frequently fall short in complex environments due to the significant human effort needed for their design and annotation. This limitation underscores the necessity for intrinsic rewards, which offer dense, auxiliary signals and enable agents to learn in an unsupervised manner. Although various intrinsic reward formulations have been proposed, their implementation and optimization details are insufficiently explored and lack standardization, thereby hindering research progress. To address this gap, we introduce RLeXplore, a unified, highly modularized, and plug-and-play framework offering reliable implementations of eight state-of-the-art intrinsic reward methods. Furthermore, we conduct an in-depth study that identifies critical implementation details and establishes well-justified standard practices in intrinsically-motivated RL. Our documentation, examples, and source code are available at this https URL.
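To make the abstract concrete, below is a minimal PyTorch sketch of one widely used intrinsic reward, Random Network Distillation (RND), which illustrates the general pattern of a modular intrinsic reward added on top of the extrinsic reward. This is an illustrative sketch, not RLeXplore's actual API; the class name RNDReward, the network sizes, and the coefficient beta are assumptions made here for demonstration.

```python
import torch
import torch.nn as nn

class RNDReward(nn.Module):
    """Random Network Distillation: the intrinsic reward is the prediction
    error between a trainable predictor and a frozen, randomly initialized
    target network. Novel states yield high error, hence high reward."""

    def __init__(self, obs_dim: int, embed_dim: int = 64):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                    nn.Linear(128, embed_dim))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                       nn.Linear(128, embed_dim))
        for p in self.target.parameters():
            p.requires_grad_(False)  # target network is never trained

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Per-sample squared prediction error serves as the novelty signal.
        return (self.predictor(obs) - self.target(obs)).pow(2).mean(dim=-1)

# Shape the reward the agent optimizes: extrinsic plus scaled intrinsic.
rnd = RNDReward(obs_dim=8)
obs = torch.randn(32, 8)       # a batch of observations (illustrative shapes)
extrinsic = torch.zeros(32)    # e.g., a sparse-reward environment
beta = 0.1                     # intrinsic reward coefficient (an assumed value)
total_reward = extrinsic + beta * rnd(obs).detach()
```

In practice, details such as normalizing the intrinsic reward, annealing beta, and deciding which transitions the predictor is trained on vary across implementations; these are exactly the kinds of under-documented choices the paper studies.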
@article{yuan2025_2405.19548,
  title={RLeXplore: Accelerating Research in Intrinsically-Motivated Reinforcement Learning},
  author={Mingqi Yuan and Roger Creus Castanyer and Bo Li and Xin Jin and Wenjun Zeng and Glen Berseth},
  journal={arXiv preprint arXiv:2405.19548},
  year={2025}
}