R3L: Relative Representations for Reinforcement Learning

19 April 2024
Antonio Pio Ricciardi
Valentino Maiorca
Luca Moschella
Riccardo Marin
Emanuele Rodolà
Abstract

Visual Reinforcement Learning is a popular and powerful framework that takes full advantage of the Deep Learning breakthrough. It is known that variations in input domains (e.g., different panorama colors due to seasonal changes) or task domains (e.g., altering the target speed of a car) can disrupt agent performance, necessitating new training for each variation. Recent advancements in the field of representation learning have demonstrated the possibility of combining components from different neural networks to create new models in a zero-shot fashion. In this paper, we build upon relative representations, a framework that maps encoder embeddings to a universal space. We adapt this framework to the Visual Reinforcement Learning setting, allowing us to combine agent components to create new agents capable of effectively handling novel visual-task pairs not encountered during training. Our findings highlight the potential for model reuse, significantly reducing the need for retraining and, consequently, the time and computational resources required.
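The core mechanism the abstract builds on, relative representations, re-expresses each embedding by its cosine similarities to a fixed set of anchor samples, so that encoders whose latent spaces differ by an angle-preserving transformation map to the same universal space. A minimal sketch of that idea (the function name, anchor choice, and toy dimensions are illustrative, not the paper's implementation):

```python
import numpy as np

def relative_representation(embeddings, anchors):
    """Project absolute embeddings into a relative space: each sample
    is re-expressed as its cosine similarity to a set of anchor
    samples embedded by the same encoder."""
    # L2-normalize rows so dot products become cosine similarities.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return e @ a.T  # shape: (n_samples, n_anchors)

# Two encoders whose latent spaces differ by an orthogonal rotation
# produce identical relative representations.
rng = np.random.default_rng(0)
z = rng.normal(size=(5, 8))                   # embeddings from encoder A
anchors = rng.normal(size=(3, 8))             # anchor embeddings, encoder A
q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal rotation
rel_a = relative_representation(z, anchors)
rel_b = relative_representation(z @ q, anchors @ q)  # "encoder B" view
print(np.allclose(rel_a, rel_b))  # True: cosine similarity is rotation-invariant
```

Because both agents' encoders emit the same relative coordinates, a policy head trained on one encoder's relative space can, in principle, be stitched onto the other encoder zero-shot, which is the model-reuse property the abstract highlights.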

@article{ricciardi2025_2404.12917,
  title={R3L: Relative Representations for Reinforcement Learning},
  author={Antonio Pio Ricciardi and Valentino Maiorca and Luca Moschella and Riccardo Marin and Emanuele Rodol{\`a}},
  journal={arXiv preprint arXiv:2404.12917},
  year={2025}
}