OmniNWM: Omniscient Driving Navigation World Models

21 October 2025
Bohan Li
Zhuang Ma
Dalong Du
Baorui Peng
Zhujin Liang
Zhenqiang Liu
Chao Ma
Yueming Jin
Hao Zhao
Wenjun Zeng
Xin Jin
Main: 8 pages · 19 figures · Bibliography: 5 pages · 7 tables · Appendix: 11 pages
Abstract

Autonomous driving world models are expected to work effectively across three core dimensions: state, action, and reward. Existing models, however, are typically restricted to limited state modalities, short video sequences, imprecise action control, and a lack of reward awareness. In this paper, we introduce OmniNWM, an omniscient panoramic navigation world model that addresses all three dimensions within a unified framework. For state, OmniNWM jointly generates panoramic videos of RGB, semantics, metric depth, and 3D occupancy. A flexible forcing strategy enables high-quality long-horizon auto-regressive generation. For action, we introduce a normalized panoramic Plücker ray-map representation that encodes input trajectories into pixel-level signals, enabling highly precise and generalizable control over panoramic video generation. Regarding reward, we move beyond learning reward functions with external image-based models: instead, we leverage the generated 3D occupancy to directly define rule-based dense rewards for driving compliance and safety. Extensive experiments demonstrate that OmniNWM achieves state-of-the-art performance in video generation, control accuracy, and long-horizon stability, while providing a reliable closed-loop evaluation framework through occupancy-grounded rewards. The project page is available at this https URL.
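The abstract's two key mechanisms can be pictured with small sketches. First, a Plücker ray map assigns each pixel a 6-channel code (ray direction, ray moment) derived from camera intrinsics and pose. The sketch below is a minimal single-camera version in Python with hypothetical function and argument names; it omits the paper's panoramic stitching and normalization details, which the abstract does not specify.

```python
import numpy as np

def plucker_ray_map(K, c2w, height, width):
    """Minimal sketch: per-pixel Plucker coordinates (d, o x d) for one
    pinhole view. K is a 3x3 intrinsic matrix, c2w a 4x4 camera-to-world
    pose; the names and the single-camera setup are assumptions."""
    # Homogeneous pixel coordinates at pixel centers.
    u, v = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)                # (H, W, 3)

    # Back-project to camera-frame rays, rotate into the world frame, normalize.
    dirs = pix @ np.linalg.inv(K).T @ c2w[:3, :3].T                 # (H, W, 3)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Moment m = o x d, with o the camera center taken from the pose.
    origin = c2w[:3, 3]
    moments = np.cross(np.broadcast_to(origin, dirs.shape), dirs)

    return np.concatenate([dirs, moments], axis=-1)                 # (H, W, 6)
```

Second, an occupancy-grounded reward can be rule-based rather than learned: penalize ego-footprint voxels that hit occupied space and reward those on drivable space. The toy function below assumes a semantic occupancy grid with hypothetical class indices and weights; the paper's actual compliance and safety terms are not detailed in the abstract.

```python
def occupancy_reward(occupancy, ego_voxels, drivable_class=1, obstacle_class=2):
    """Toy rule-based dense reward over a semantic 3D occupancy grid.
    occupancy: (X, Y, Z) integer class grid; ego_voxels: (N, 3) voxel indices
    covered by the ego box. Class indices and weights are illustrative."""
    labels = occupancy[ego_voxels[:, 0], ego_voxels[:, 1], ego_voxels[:, 2]]
    collision_penalty = -10.0 * np.count_nonzero(labels == obstacle_class)
    compliance_bonus = 1.0 * np.count_nonzero(labels == drivable_class)
    return collision_penalty + compliance_bonus
```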
