Seeing the Future, Perceiving the Future: A Unified Driving World Model for Future Generation and Perception

17 March 2025
Dingkang Liang
Dingyuan Zhang
Xin Zhou
Sifan Tu
Tianrui Feng
Xiaofan Li
Yumeng Zhang
Mingyang Du
Xiao Tan
Xiang Bai
Abstract

We present UniFuture, a simple yet effective driving world model that seamlessly integrates future scene generation and perception within a single framework. Unlike existing models that focus solely on pixel-level future prediction or geometric reasoning, our approach jointly models future appearance (i.e., RGB image) and geometry (i.e., depth), ensuring coherent predictions. Specifically, during training, we first introduce a Dual-Latent Sharing scheme, which transfers image and depth sequences into a shared latent space, allowing both modalities to benefit from shared feature learning. Additionally, we propose a Multi-scale Latent Interaction mechanism, which facilitates bidirectional refinement between image and depth features at multiple spatial scales, effectively enhancing geometry consistency and perceptual alignment. During testing, UniFuture can readily predict highly consistent future image-depth pairs using only the current image as input. Extensive experiments on the nuScenes dataset demonstrate that UniFuture outperforms specialized models on future generation and perception tasks, highlighting the advantages of a unified, structurally aware world model. The project page is at this https URL.
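To make the Multi-scale Latent Interaction idea more concrete, below is a minimal PyTorch sketch of bidirectional refinement between image and depth latents at several spatial scales. This is not the authors' implementation: the class names (MultiScaleLatentInteraction, LatentInteractionBlock), the 1x1-convolution projections, and the residual update rule are illustrative assumptions based only on the abstract.

# Hypothetical sketch (not the paper's code): bidirectional multi-scale
# interaction between image and depth latents, in the spirit of the
# Multi-scale Latent Interaction mechanism described in the abstract.
import torch
import torch.nn as nn


class LatentInteractionBlock(nn.Module):
    """Exchanges information between image and depth features at one scale."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project each modality into the other's feature space
        # (an assumed, simple choice of interaction operator).
        self.img_to_depth = nn.Conv2d(channels, channels, kernel_size=1)
        self.depth_to_img = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, img_feat: torch.Tensor, depth_feat: torch.Tensor):
        # Bidirectional residual refinement: each modality is updated with a
        # projection of the other while keeping its original signal intact.
        img_refined = img_feat + self.depth_to_img(depth_feat)
        depth_refined = depth_feat + self.img_to_depth(img_feat)
        return img_refined, depth_refined


class MultiScaleLatentInteraction(nn.Module):
    """Applies one interaction block per spatial scale."""

    def __init__(self, channels: int, num_scales: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(
            [LatentInteractionBlock(channels) for _ in range(num_scales)]
        )

    def forward(self, img_feats, depth_feats):
        # img_feats / depth_feats: lists of feature maps, one per scale.
        refined = [
            block(i, d) for block, i, d in zip(self.blocks, img_feats, depth_feats)
        ]
        img_out = [pair[0] for pair in refined]
        depth_out = [pair[1] for pair in refined]
        return img_out, depth_out


if __name__ == "__main__":
    # Toy usage: three scales of 64-channel latents at decreasing resolution.
    interact = MultiScaleLatentInteraction(channels=64, num_scales=3)
    imgs = [torch.randn(1, 64, s, s) for s in (32, 16, 8)]
    depths = [torch.randn(1, 64, s, s) for s in (32, 16, 8)]
    img_out, depth_out = interact(imgs, depths)
    print([f.shape for f in img_out])

The residual form is an assumption chosen so that each modality retains its own features while absorbing cues from the other, which matches the abstract's emphasis on mutual refinement rather than replacement.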

@article{liang2025_2503.13587,
  title={Seeing the Future, Perceiving the Future: A Unified Driving World Model for Future Generation and Perception},
  author={Dingkang Liang and Dingyuan Zhang and Xin Zhou and Sifan Tu and Tianrui Feng and Xiaofan Li and Yumeng Zhang and Mingyang Du and Xiao Tan and Xiang Bai},
  journal={arXiv preprint arXiv:2503.13587},
  year={2025}
}