Generating Multimodal Driving Scenes via Next-Scene Prediction

Abstract

Generative models in Autonomous Driving (AD) enable diverse scene creation, yet existing methods fall short by capturing only a limited range of modalities, restricting their ability to generate controllable scenes for comprehensive evaluation of AD systems. In this paper, we introduce a multimodal generation framework that incorporates four major data modalities, including the novel addition of a map modality. With tokenized modalities, our scene sequence generation framework autoregressively predicts each scene while managing computational demands through a two-stage approach: the Temporal AutoRegressive (TAR) component captures inter-frame dynamics for each modality, while the Ordered AutoRegressive (OAR) component aligns modalities within each scene by predicting tokens sequentially in a fixed order. To keep the map and ego-action modalities coherent, we introduce the Action-aware Map Alignment (AMA) module, which applies a transformation to the map based on the ego-action. Our framework effectively generates complex, realistic driving scenes over extended sequences, ensuring multimodal consistency and offering fine-grained control over scene elements. Project page: this https URL
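
To make the two-stage scheme concrete, below is a minimal Python (PyTorch) sketch of next-scene generation. It is not the authors' implementation: the class names (TemporalAR, OrderedAR, generate_next_scene), the vocabulary size, the per-modality token count, the modality order, and greedy decoding are all illustrative assumptions. The sketch only shows the division of labor described in the abstract: a temporal stage attends over past scenes, and an ordered stage decodes the next scene's tokens in a fixed order conditioned on that temporal context.

```python
# Hypothetical sketch of the two-stage TAR/OAR generation loop.
# All names, sizes, and the greedy decoding strategy are assumptions,
# not details from the paper.
import torch
import torch.nn as nn

VOCAB = 1024            # assumed shared token vocabulary size
TOKENS_PER_MODALITY = 16
MODALITIES = 4          # four data modalities, order assumed fixed

def causal_mask(n: int) -> torch.Tensor:
    # Additive mask: position i may only attend to positions <= i.
    return torch.triu(torch.full((n, n), float("-inf")), diagonal=1)

class TemporalAR(nn.Module):
    """TAR stand-in: attends over previous scene summaries (inter-frame dynamics)."""
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, x):                 # x: (B, T, dim) per-scene summaries
        return self.encoder(x, mask=causal_mask(x.size(1)))

class OrderedAR(nn.Module):
    """OAR stand-in: predicts next-scene tokens sequentially in a fixed modality order."""
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        layer = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, layers)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, tokens, context):   # tokens: (B, N), context: (B, T, dim)
        x = self.embed(tokens)
        x = self.decoder(x, context, tgt_mask=causal_mask(tokens.size(1)))
        return self.head(x)               # (B, N, VOCAB) next-token logits

@torch.no_grad()
def generate_next_scene(tar, oar, history_summaries, bos_token=0):
    """Greedy generation of one scene: all modalities, fixed token order."""
    context = tar(history_summaries)      # temporal context from past scenes
    tokens = torch.full((history_summaries.size(0), 1), bos_token, dtype=torch.long)
    for _ in range(MODALITIES * TOKENS_PER_MODALITY):
        logits = oar(tokens, context)
        next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens[:, 1:]                  # drop BOS; (B, MODALITIES * TOKENS_PER_MODALITY)

tar, oar = TemporalAR(), OrderedAR()
history = torch.randn(1, 5, 256)          # dummy summaries of 5 past scenes
scene_tokens = generate_next_scene(tar, oar, history)
print(scene_tokens.shape)                 # torch.Size([1, 64])
```

In the paper's framework, the AMA module would additionally apply an ego-action-based transformation to the map tokens so that the map and ego-action modalities stay consistent; this sketch omits that step.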

@article{wu2025_2503.14945,
  title={Generating Multimodal Driving Scenes via Next-Scene Prediction},
  author={Yanhao Wu and Haoyang Zhang and Tianwei Lin and Lichao Huang and Shujie Luo and Rui Wu and Congpei Qiu and Wei Ke and Tong Zhang},
  journal={arXiv preprint arXiv:2503.14945},
  year={2025}
}