Efficient Alignment of Unconditioned Action Prior for Language-conditioned Pick and Place in Clutter

12 March 2025
Kechun Xu
Xunlong Xia
Kaixuan Wang
Yifei Yang
Yunxuan Mao
Bing Deng
Rong Xiong
Yue Wang
Abstract

We study the task of language-conditioned pick and place in clutter, where a robot must grasp a target object in open clutter and move it to a specified place. Some approaches learn end-to-end policies with features from vision foundation models, requiring large datasets. Others combine foundation models in a zero-shot setting, suffering from cascading errors. Moreover, such methods primarily leverage vision and language foundation models, focusing less on action priors. In this paper, we aim to develop an effective policy by integrating foundation priors from vision, language, and action. We propose A², an action prior alignment method that aligns unconditioned action priors with 3D vision-language priors by learning one attention layer. The alignment formulation enables our policy to train with less data and preserve zero-shot generalization capabilities. We show that a shared policy for both pick and place actions enhances the performance for each task, and we introduce a policy adaptation scheme to accommodate the multi-modal nature of actions. Extensive experiments in simulation and the real world show that our policy achieves higher task success rates with fewer steps for both pick and place tasks in clutter, effectively generalizing to unseen objects and language instructions. Videos and code are available at this https URL.
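For illustration, the following is a minimal sketch of the core idea described in the abstract: a single learned attention layer in which unconditioned action-prior features attend to language-conditioned 3D scene features to produce aligned action scores. This is not the authors' implementation; all module names, tensor shapes, and the scoring head are assumptions made for the sketch.

# Minimal sketch (not the authors' code) of aligning an unconditioned action
# prior with 3D vision-language features via one learned attention layer.
# Shapes, names, and the score head are illustrative assumptions.
import torch
import torch.nn as nn


class ActionPriorAlignment(nn.Module):
    def __init__(self, action_dim: int = 256, vl_dim: int = 512, n_heads: int = 8):
        super().__init__()
        # Project action-prior features into the vision-language embedding space.
        self.action_proj = nn.Linear(action_dim, vl_dim)
        # One cross-attention layer: action-prior tokens (queries) attend to
        # language-conditioned 3D scene tokens (keys/values).
        self.cross_attn = nn.MultiheadAttention(vl_dim, n_heads, batch_first=True)
        # Turn the aligned features into per-candidate action logits.
        self.score_head = nn.Linear(vl_dim, 1)

    def forward(self, action_feats: torch.Tensor, vl_feats: torch.Tensor) -> torch.Tensor:
        # action_feats: (B, N_actions, action_dim) unconditioned action-prior features
        # vl_feats:     (B, N_points,  vl_dim)     language-conditioned 3D scene features
        q = self.action_proj(action_feats)
        aligned, _ = self.cross_attn(query=q, key=vl_feats, value=vl_feats)
        return self.score_head(aligned).squeeze(-1)  # (B, N_actions) candidate scores


if __name__ == "__main__":
    model = ActionPriorAlignment()
    scores = model(torch.randn(2, 64, 256), torch.randn(2, 1024, 512))
    print(scores.shape)  # torch.Size([2, 64])

Because only the single attention layer (plus small projections) is trained while the underlying priors stay frozen, such an alignment can plausibly be learned from comparatively little data, which is the property the abstract emphasizes.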

@article{xu2025_2503.09423,
  title={Efficient Alignment of Unconditioned Action Prior for Language-conditioned Pick and Place in Clutter},
  author={Kechun Xu and Xunlong Xia and Kaixuan Wang and Yifei Yang and Yunxuan Mao and Bing Deng and Rong Xiong and Yue Wang},
  journal={arXiv preprint arXiv:2503.09423},
  year={2025}
}