RynnVLA-002: A Unified Vision-Language-Action and World Model

21 November 2025
Jun Cen
Siteng Huang
Yuqian Yuan
Kehan Li
Hangjie Yuan
Chaohui Yu
Yuming Jiang
J. Guo
Xin Li
Hao Luo
Fan Wang
Deli Zhao
H. Chen
arXiv:2511.17502 (abs) · PDF · HTML · HuggingFace · GitHub
Main: 11 pages, 7 figures, 7 tables · Bibliography: 4 pages · Appendix: 1 page
Abstract

We introduce RynnVLA-002, a unified Vision-Language-Action (VLA) and world model. The world model leverages action and visual inputs to predict future image states, learning the underlying physics of the environment to refine action generation. Conversely, the VLA model produces subsequent actions from image observations, enhancing visual understanding and supporting the world model's image generation. The unified framework of RynnVLA-002 enables joint learning of environmental dynamics and action planning. Our experiments show that RynnVLA-002 surpasses individual VLA and world models, demonstrating their mutual enhancement. We evaluate RynnVLA-002 in both simulation and real-world robot tasks. RynnVLA-002 achieves a 97.4% success rate on the LIBERO simulation benchmark without pretraining, while in real-world LeRobot experiments, its integrated world model boosts the overall success rate by 50%.
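The abstract describes a bidirectional coupling: a world-model branch takes the current observation plus an action and predicts the next image state, while the VLA branch maps observations to actions, and the two are optimized jointly. Below is a minimal sketch of such a joint objective; the module names, feature dimensions, shared encoder, and equal loss weighting are illustrative assumptions, not the paper's actual architecture or training recipe.

```python
# Hypothetical sketch of joint VLA + world-model training.
# All components (shared encoder, action head, dynamics head, loss weights)
# are assumptions for illustration, not RynnVLA-002's real design.
import torch
import torch.nn as nn

class UnifiedVLAWorldModel(nn.Module):
    def __init__(self, obs_dim=512, action_dim=7, hidden_dim=256):
        super().__init__()
        # Shared encoder over (assumed pre-extracted) image features.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        # VLA branch: predicts the next action from the current observation.
        self.action_head = nn.Linear(hidden_dim, action_dim)
        # World-model branch: predicts the next observation from the
        # current observation and the action taken.
        self.dynamics_head = nn.Sequential(
            nn.Linear(hidden_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, obs_dim),
        )

    def forward(self, obs, action):
        h = self.encoder(obs)
        pred_action = self.action_head(h)
        pred_next_obs = self.dynamics_head(torch.cat([h, action], dim=-1))
        return pred_action, pred_next_obs

model = UnifiedVLAWorldModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch of (observation, expert action, next observation) features.
obs = torch.randn(32, 512)
action = torch.randn(32, 7)
next_obs = torch.randn(32, 512)

pred_action, pred_next_obs = model(obs, action)
# Joint objective: action imitation + future-state prediction,
# so each branch shapes the representation the other relies on.
loss = (nn.functional.mse_loss(pred_action, action)
        + nn.functional.mse_loss(pred_next_obs, next_obs))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```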
