In robot manipulation, Reinforcement Learning (RL) often suffers from low sample efficiency and uncertain convergence, especially in large observation and action spaces. Foundation Models (FMs) offer an alternative, demonstrating promise in zero-shot and few-shot settings. However, they can be unreliable due to limited physical and spatial understanding. We introduce ExploRLLM, a method that combines the strengths of both paradigms. In our approach, FMs improve RL convergence by generating policy code and efficient representations, while a residual RL agent compensates for the FMs' limited physical understanding. We show that ExploRLLM outperforms both FM-derived policies and RL baselines in table-top manipulation tasks. Additionally, real-world experiments show that the policies exhibit promising zero-shot sim-to-real transfer. Supplementary material is available at this https URL.
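For intuition only, the sketch below illustrates the residual composition the abstract describes: an FM-derived base policy proposes an action and a learned RL agent adds a bounded correction. All names here (fm_base_policy, ResidualAgent, combined_action) are illustrative assumptions, not the authors' implementation, which additionally uses FM-generated representations to guide exploration.

```python
# Hypothetical sketch of residual action composition (assumed names, not the paper's code):
# an FM-derived base policy proposes an action, and a residual RL agent adds a small correction.
import numpy as np


def fm_base_policy(observation: np.ndarray) -> np.ndarray:
    """Stand-in for a policy obtained from a foundation model
    (e.g., LLM-generated policy code or waypoints)."""
    # Illustrative behavior: a clipped proportional move toward a fixed target pose.
    target = np.array([0.5, 0.0, 0.2])
    return np.clip(target - observation[:3], -0.05, 0.05)


class ResidualAgent:
    """Placeholder for the residual RL agent; a trained policy network would go here."""

    def __init__(self, action_dim: int, scale: float = 0.01):
        self.action_dim = action_dim
        self.scale = scale

    def act(self, observation: np.ndarray) -> np.ndarray:
        # Random bounded correction, purely to illustrate the interface.
        return self.scale * np.random.uniform(-1.0, 1.0, self.action_dim)


def combined_action(obs: np.ndarray, residual: ResidualAgent) -> np.ndarray:
    """Final command = FM base action + learned residual correction."""
    return fm_base_policy(obs) + residual.act(obs)


if __name__ == "__main__":
    obs = np.zeros(6)  # e.g., end-effector position plus auxiliary state
    agent = ResidualAgent(action_dim=3)
    print(combined_action(obs, agent))
```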
@article{ma2025_2403.09583,
  title   = {ExploRLLM: Guiding Exploration in Reinforcement Learning with Large Language Models},
  author  = {Runyu Ma and Jelle Luijkx and Zlatan Ajanovic and Jens Kober},
  journal = {arXiv preprint arXiv:2403.09583},
  year    = {2025}
}