
DriveGen3D: Boosting Feed-Forward Driving Scene Generation with Efficient Video Diffusion

Main: 5 pages, 7 figures, 3 tables; Bibliography: 2 pages
Abstract

We present DriveGen3D, a novel framework for generating high-quality and highly controllable dynamic 3D driving scenes that addresses critical limitations in existing methodologies. Current approaches to driving scene synthesis either suffer from prohibitive computational demands for extended temporal generation, focus exclusively on prolonged video synthesis without 3D representation, or restrict themselves to static single-scene reconstruction. Our work bridges this methodological gap by integrating accelerated long-term video generation with large-scale dynamic scene reconstruction through multimodal conditional control. DriveGen3D introduces a unified pipeline consisting of two specialized components: FastDrive-DiT, an efficient video diffusion transformer for high-resolution, temporally coherent video synthesis under text and Bird's-Eye-View (BEV) layout guidance; and FastRecon3D, a feed-forward module that rapidly builds 3D Gaussian representations across time, ensuring spatial-temporal consistency. DriveGen3D enables the generation of long driving videos (up to 800×424 resolution at 12 FPS) and corresponding 3D scenes, achieving state-of-the-art results while maintaining efficiency.
