

GR-3 Technical Report

21 July 2025
Chilam Cheang
S. Chen
Zhongren Cui
Yingdong Hu
Liqun Huang
Tao Kong
Hang Li
Y. Li
Y. Liu
Xiao Ma
Hao Niu
Wenxuan Ou
Wanli Peng
Zeyu Ren
Haixin Shi
Jiawen Tian
Hongtao Wu
Xin Xiao
Yuyang Xiao
Jiafeng Xu
Yichu Yang
arXiv: 2507.15493 (abs · PDF · HTML) · Hugging Face (41 upvotes) · GitHub (6,754 ★)
Main: 15 pages, 10 figures · Bibliography: 5 pages · 1 table
Abstract

We report our recent progress towards building generalist robot policies, the development of GR-3. GR-3 is a large-scale vision-language-action (VLA) model. It showcases exceptional capabilities in generalizing to novel objects, environments, and instructions involving abstract concepts. Furthermore, it can be efficiently fine-tuned with minimal human trajectory data, enabling rapid and cost-effective adaptation to new settings. GR-3 also excels in handling long-horizon and dexterous tasks, including those requiring bi-manual manipulation and mobile movement, showcasing robust and reliable performance. These capabilities are achieved through a multi-faceted training recipe that includes co-training with web-scale vision-language data, efficient fine-tuning from human trajectory data collected via VR devices, and effective imitation learning with robot trajectory data. In addition, we introduce ByteMini, a versatile bi-manual mobile robot designed with exceptional flexibility and reliability, capable of accomplishing a wide range of tasks when integrated with GR-3. Through extensive real-world experiments, we show GR-3 surpasses the state-of-the-art baseline method, π0, on a wide variety of challenging tasks. We hope GR-3 can serve as a step towards building generalist robots capable of assisting humans in daily life.
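The abstract names imitation learning on robot trajectory data as one ingredient of the training recipe. As a rough, hypothetical illustration of that general idea only, and not of GR-3's actual architecture, loss, or data pipeline, the sketch below shows a single behavior-cloning update for a toy vision-language-action policy in PyTorch. Every module, dimension, and hyperparameter here is an assumption made for the example.

```python
# Illustrative sketch only: a generic behavior-cloning (imitation learning)
# step for a toy vision-language-action policy. All module names, sizes,
# and the loss choice are assumptions, not details from the GR-3 report.
import torch
import torch.nn as nn

class ToyVLAPolicy(nn.Module):
    """Maps an image observation and a tokenized instruction to an action."""
    def __init__(self, vocab_size=1000, embed_dim=128, action_dim=7):
        super().__init__()
        # Tiny stand-ins for the vision and language encoders.
        self.vision_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, embed_dim),
        )
        self.text_embed = nn.Embedding(vocab_size, embed_dim)
        # Action head consumes the fused visual/text features.
        self.action_head = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(), nn.Linear(256, action_dim),
        )

    def forward(self, image, instruction_tokens):
        vis = self.vision_encoder(image)                       # (B, D)
        txt = self.text_embed(instruction_tokens).mean(dim=1)  # (B, D), mean-pooled
        return self.action_head(torch.cat([vis, txt], dim=-1))

# One behavior-cloning update on a dummy batch of demonstrations.
policy = ToyVLAPolicy()
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)      # camera observations
tokens = torch.randint(0, 1000, (8, 16))  # tokenized instructions
expert_actions = torch.randn(8, 7)        # demonstrated robot actions

pred_actions = policy(images, tokens)
loss = nn.functional.mse_loss(pred_actions, expert_actions)  # regress to the demo
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"behavior-cloning loss: {loss.item():.4f}")
```

A real VLA system of the kind the abstract describes would presumably start from pretrained vision-language backbones and co-train with web-scale vision-language data rather than train such small encoders from scratch; the snippet is only meant to make the imitation-learning step concrete.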
