CrayonRobo: Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation

4 May 2025
Xiaoqi Li
Lingyun Xu
Mingxu Zhang
Jiaming Liu
Yan Shen
Iaroslav Ponomarenko
Jiahui Xu
Liang Heng
Siyuan Huang
Shanghang Zhang
Hao Dong
Abstract

In robotics, task goals can be conveyed through various modalities, such as language, goal images, and goal videos. However, natural language can be ambiguous, while images or videos may offer overly detailed specifications. To tackle these challenges, we introduce CrayonRobo, which leverages comprehensive multi-modal prompts that explicitly convey both low-level actions and high-level planning in a simple manner. Specifically, for each key-frame in the task sequence, our method allows for manual or automatic generation of simple and expressive 2D visual prompts overlaid on RGB images. These prompts represent the required task goals, such as the end-effector pose and the desired movement direction after contact. We develop a training strategy that enables the model to interpret these visual-language prompts and predict the corresponding contact poses and movement directions in SE(3) space. Furthermore, by sequentially executing all key-frame steps, the model can complete long-horizon tasks. This approach not only helps the model explicitly understand the task objectives but also enhances its robustness on unseen tasks by providing easily interpretable prompts. We evaluate our method in both simulated and real-world environments, demonstrating its robust manipulation capabilities.
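The abstract describes a per-key-frame loop: 2D visual prompts drawn on the RGB observation are interpreted together with language, the model predicts a contact pose and a post-contact movement direction in SE(3), and executing key-frames in sequence completes long-horizon tasks. The minimal Python sketch below illustrates that loop only; the data classes, model.predict, camera.capture, and the robot calls are illustrative assumptions, not the paper's actual interface.

# Hypothetical sketch (not the paper's API): sequencing key-frame prompts
# and acting on predicted contact pose + movement direction in SE(3).
from dataclasses import dataclass
import numpy as np

@dataclass
class VisualPrompt:
    """2D cues overlaid on the RGB frame (assumed structure)."""
    contact_uv: tuple[int, int]          # pixel location of the desired contact point
    gripper_dir_uv: tuple[float, float]  # 2D arrow hinting the end-effector orientation
    move_dir_uv: tuple[float, float]     # 2D arrow hinting the post-contact motion

@dataclass
class KeyFramePrediction:
    contact_pose: np.ndarray  # 4x4 homogeneous transform in SE(3)
    move_dir: np.ndarray      # 3D unit vector for movement after contact

def run_episode(model, camera, robot, language_goal, keyframe_prompts):
    """Complete a long-horizon task by stepping through key-frame prompts in order."""
    for prompt in keyframe_prompts:
        rgb = camera.capture()                         # current RGB observation
        pred: KeyFramePrediction = model.predict(      # model consumes image + visual/language prompts
            rgb=rgb, visual_prompt=prompt, text=language_goal
        )
        robot.move_to_pose(pred.contact_pose)          # reach the predicted contact pose
        robot.translate(pred.move_dir, distance=0.05)  # move along the predicted direction after contact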

@article{li2025_2505.02166,
  title={CrayonRobo: Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation},
  author={Xiaoqi Li and Lingyun Xu and Mingxu Zhang and Jiaming Liu and Yan Shen and Iaroslav Ponomarenko and Jiahui Xu and Liang Heng and Siyuan Huang and Shanghang Zhang and Hao Dong},
  journal={arXiv preprint arXiv:2505.02166},
  year={2025}
}