
arXiv: 2501.14729
HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene Understanding and Generation

13 March 2025
Xin Zhou
Dingkang Liang
Sifan Tu
Xiwu Chen
Yikang Ding
Dingyuan Zhang
Feiyang Tan
Hengshuang Zhao
Xiang Bai
Links: arXiv (abs) · PDF · HTML · HuggingFace
Main text: 14 pages, 7 figures, 10 tables. Bibliography: 3 pages.
Abstract

Driving World Models (DWMs) have become essential for autonomous driving by enabling future scene prediction. However, existing DWMs are limited to scene generation and fail to incorporate scene understanding, which involves interpreting and reasoning about the driving environment. In this paper, we present a unified Driving World Model named HERMES. We seamlessly integrate 3D scene understanding and future scene evolution (generation) through a unified framework in driving scenarios. Specifically, HERMES leverages a Bird's-Eye View (BEV) representation to consolidate multi-view spatial information while preserving geometric relationships and interactions. We also introduce world queries, which incorporate world knowledge into BEV features via causal attention in the Large Language Model, enabling contextual enrichment for understanding and generation tasks. We conduct comprehensive studies on the nuScenes and OmniDrive-nuScenes datasets to validate the effectiveness of our method. HERMES achieves state-of-the-art performance, reducing generation error by 32.4% and improving understanding metrics such as CIDEr by 8.0%. The model and code will be publicly released at this https URL.
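The abstract's core architectural idea, learnable "world queries" enriched by causal attention over flattened BEV features, can be illustrated with a minimal sketch. This is not the authors' released code: the module names, tensor shapes, number of queries, and the use of a generic causal Transformer encoder standing in for the paper's Large Language Model are all assumptions made purely for illustration.

```python
# Minimal sketch (assumed, not the HERMES implementation): learnable world queries
# are appended to flattened BEV tokens and processed with a causal attention mask,
# so the queries (placed last) can read the full BEV context.
import torch
import torch.nn as nn

class WorldQueryBlock(nn.Module):
    """Hypothetical block: BEV tokens + world queries through causal self-attention."""
    def __init__(self, dim=256, num_queries=32, num_heads=8, num_layers=2):
        super().__init__()
        self.world_queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, bev_tokens):
        # bev_tokens: (B, N, dim) flattened Bird's-Eye-View features
        B, N, _ = bev_tokens.shape
        queries = self.world_queries.unsqueeze(0).expand(B, -1, -1)   # (B, Q, dim)
        seq = torch.cat([bev_tokens, queries], dim=1)                 # (B, N + Q, dim)
        # Causal mask: True entries are blocked, so each position attends only
        # to itself and earlier positions.
        L = seq.size(1)
        causal_mask = torch.triu(
            torch.ones(L, L, dtype=torch.bool, device=bev_tokens.device), diagonal=1
        )
        out = self.encoder(seq, mask=causal_mask)
        enriched_bev, enriched_queries = out[:, :N], out[:, N:]
        return enriched_bev, enriched_queries

# Toy usage: a 20x20 BEV grid flattened to 400 tokens
block = WorldQueryBlock()
bev = torch.randn(2, 400, 256)
bev_out, world_out = block(bev)
```

In this reading, the enriched BEV tokens would feed a generation head (future scene evolution) while the enriched world queries would feed an understanding head (language-based reasoning); how HERMES actually wires these outputs is described in the paper itself, not in this sketch.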
