Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models

26 February 2025
Lucy Xiaoyang Shi
Brian Ichter
Michael Equi
Liyiming Ke
Karl Pertsch
Quan Vuong
James Tanner
Anna Walling
Haohuan Wang
Niccolo Fusai
Adrian Li-Bell
Danny Driess
Lachy Groom
Sergey Levine
Chelsea Finn
Topics: LM&Ro, LRM
Abstract

Generalist robots that can perform a range of different tasks in open-world settings must be able to not only reason about the steps needed to accomplish their goals, but also process complex instructions, prompts, and even feedback during task execution. Intricate instructions (e.g., "Could you make me a vegetarian sandwich?" or "I don't like that one") require not just the ability to physically perform the individual steps, but the ability to situate complex commands and feedback in the physical world. In this work, we describe a system that uses vision-language models in a hierarchical structure, first reasoning over complex prompts and user feedback to deduce the most appropriate next step to fulfill the task, and then performing that step with low-level actions. In contrast to direct instruction following methods that can fulfill simple commands ("pick up the cup"), our system can reason through complex prompts and incorporate situated feedback during task execution ("that's not trash"). We evaluate our system across three robotic platforms, including single-arm, dual-arm, and dual-arm mobile robots, demonstrating its ability to handle tasks such as cleaning messy tables, making sandwiches, and grocery shopping.
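The hierarchy described in the abstract amounts to a two-level control loop: a high-level vision-language model reads the open-ended prompt, the current camera images, and any user feedback, and emits a short language command for the next step; a low-level vision-language-action policy then turns that command into robot actions. The sketch below is a minimal, illustrative Python rendering of such a loop; the class names, method signatures, and re-planning schedule (HighLevelVLM, LowLevelPolicy, run_episode, replan_every) are assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch of a hierarchical VLM/VLA control loop.
# All names and signatures are hypothetical, not taken from the paper.

from dataclasses import dataclass


@dataclass
class Observation:
    images: list                        # current camera frames
    user_utterance: str | None = None   # new prompt or feedback, if any


class HighLevelVLM:
    """Reasons over the task prompt, feedback, and images to pick the next step."""

    def plan_next_step(self, task_prompt: str, obs: Observation, history: list) -> str:
        # Returns a short language command, e.g. "pick up the bread slice".
        raise NotImplementedError


class LowLevelPolicy:
    """Vision-language-action policy that executes a single language command."""

    def act(self, step_cmd: str, obs: Observation):
        # Returns low-level robot actions (e.g. end-effector or joint targets).
        raise NotImplementedError


def run_episode(task_prompt, high_level, low_level, get_observation, send_action,
                max_steps=1000, replan_every=50):
    """Alternate between high-level step selection and low-level execution."""
    history, step_cmd = [], None
    for t in range(max_steps):
        obs = get_observation()
        # Re-plan when feedback arrives or on a fixed schedule (assumed policy).
        if step_cmd is None or obs.user_utterance or t % replan_every == 0:
            step_cmd = high_level.plan_next_step(task_prompt, obs, history)
            history.append(step_cmd)
        send_action(low_level.act(step_cmd, obs))
```

Re-planning whenever the user speaks is what lets situated corrections such as "that's not trash" redirect the ongoing task without restarting it.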

View on arXiv
@article{shi2025_2502.19417,
  title={Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models},
  author={Lucy Xiaoyang Shi and Brian Ichter and Michael Equi and Liyiming Ke and Karl Pertsch and Quan Vuong and James Tanner and Anna Walling and Haohuan Wang and Niccolo Fusai and Adrian Li-Bell and Danny Driess and Lachy Groom and Sergey Levine and Chelsea Finn},
  journal={arXiv preprint arXiv:2502.19417},
  year={2025}
}