ExploRLLM: Guiding Exploration in Reinforcement Learning with Large Language Models

14 March 2024
Runyu Ma
Jelle Luijkx
Zlatan Ajanović
Jens Kober
Abstract

In robot manipulation, Reinforcement Learning (RL) often suffers from low sample efficiency and uncertain convergence, especially in large observation and action spaces. Foundation Models (FMs) offer an alternative, demonstrating promise in zero-shot and few-shot settings. However, they can be unreliable due to limited physical and spatial understanding. We introduce ExploRLLM, a method that combines the strengths of both paradigms. In our approach, FMs improve RL convergence by generating policy code and efficient representations, while a residual RL agent compensates for the FMs' limited physical understanding. We show that ExploRLLM outperforms both policies derived from FMs and RL baselines in table-top manipulation tasks. Additionally, real-world experiments show that the policies exhibit promising zero-shot sim-to-real transfer. Supplementary material is available at this https URL.

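The abstract's central mechanism, a residual RL agent acting on top of an FM-generated base policy, can be sketched as follows. This is a minimal illustration of the residual-action idea only, not the authors' implementation; all names here (llm_base_policy, ResidualAgent, DummyEnv) are hypothetical placeholders.

```python
# Minimal sketch of residual RL on top of an FM-generated base policy,
# as described in the abstract. All names are hypothetical placeholders,
# not the actual ExploRLLM code.
import numpy as np

def llm_base_policy(observation):
    """Stand-in for an FM-generated policy: maps an observation to a
    coarse action proposal (e.g., a pick-and-place primitive)."""
    # In ExploRLLM the FM generates policy code; here we return a fixed
    # placeholder action in a 4-D action space.
    return np.zeros(4)

class ResidualAgent:
    """Toy residual policy: learns a correction added to the FM action."""
    def __init__(self, action_dim=4, lr=1e-2):
        self.w = np.zeros(action_dim)  # trainable residual parameters
        self.lr = lr

    def act(self, observation):
        return self.w  # constant residual, for illustration only

    def update(self, grad):
        self.w -= self.lr * grad  # generic gradient step

def step(env, agent, observation):
    base = llm_base_policy(observation)   # FM proposal
    residual = agent.act(observation)     # learned correction
    action = base + residual              # executed action
    return env.step(action)

if __name__ == "__main__":
    class DummyEnv:
        def step(self, action):
            return np.zeros(8), 0.0, False  # obs, reward, done

    env, agent = DummyEnv(), ResidualAgent()
    obs = np.zeros(8)
    obs, reward, done = step(env, agent, obs)
```

The design point conveyed by the abstract is that the FM supplies a reasonable base action so the RL agent only explores a small correction space, which is what improves sample efficiency and convergence.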
@article{ma2025_2403.09583,
  title={ExploRLLM: Guiding Exploration in Reinforcement Learning with Large Language Models},
  author={Runyu Ma and Jelle Luijkx and Zlatan Ajanović and Jens Kober},
  journal={arXiv preprint arXiv:2403.09583},
  year={2025}
}