ResearchTrend.AI



Efficient Continual Adaptation of Pretrained Robotic Policy with Online Meta-Learned Adapters

24 March 2025
Ruiqi Zhu
Endong Sun
Guanhe Huang
Oya Celiktutan
    CLL
    OnRL
Abstract

Continual adaptation is essential for general autonomous agents. For example, a household robot pretrained with a repertoire of skills must still adapt to unseen tasks specific to each household. Motivated by this, and building upon parameter-efficient fine-tuning in language models, prior works have explored lightweight adapters for adapting pretrained policies; these preserve features learned during pretraining and achieve good adaptation performance. However, such approaches treat each task's learning separately, limiting knowledge transfer between tasks. In this paper, we propose Online Meta-Learned Adapters (OMLA). Instead of applying adapters directly, OMLA facilitates knowledge transfer from previously learned tasks to the current task through a novel meta-learning objective. Extensive experiments in both simulated and real-world environments demonstrate that OMLA achieves better adaptation performance than baseline methods. The project link: this https URL.
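The abstract does not include implementation details, but the lightweight-adapter pattern it builds on is well established in parameter-efficient fine-tuning. A minimal sketch of that pattern, assuming a frozen pretrained linear layer with a low-rank bottleneck adapter added as a residual (all names and sizes here are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hidden size d and adapter bottleneck rank r, with r << d

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection; zero-init
                                         # so the adapter starts as identity

def forward(x):
    h = W @ x            # frozen pretrained path
    return h + B @ (A @ h)  # lightweight adapter residual (only A, B train)

x = rng.standard_normal(d)
# With B zero-initialized, the adapted output equals the pretrained output,
# so adaptation starts from the pretrained policy's behavior.
assert np.allclose(forward(x), W @ x)
```

Because only `A` and `B` (2·d·r parameters) are updated per task while `W` stays frozen, pretrained features are preserved; OMLA's contribution, per the abstract, is meta-learning such adapters online so that earlier tasks inform later ones, rather than training each adapter in isolation.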

@article{zhu2025_2503.18684,
  title={Efficient Continual Adaptation of Pretrained Robotic Policy with Online Meta-Learned Adapters},
  author={Ruiqi Zhu and Endong Sun and Guanhe Huang and Oya Celiktutan},
  journal={arXiv preprint arXiv:2503.18684},
  year={2025}
}