CoGen: 3D Consistent Video Generation via Adaptive Conditioning for Autonomous Driving

28 March 2025
Yishen Ji
Ziyue Zhu
Zhenxin Zhu
Kaixin Xiong
Ming Lu
Zhiqi Li
Lijun Zhou
Haiyang Sun
Bing Wang
Tong Lu
Abstract

Recent progress in driving video generation has shown significant potential for enhancing self-driving systems by providing scalable and controllable training data. Although pretrained state-of-the-art generation models, guided by 2D layout conditions (e.g., HD maps and bounding boxes), can produce photorealistic driving videos, achieving controllable multi-view videos with high 3D consistency remains a major challenge. To tackle this, we introduce a novel spatial adaptive generation framework, CoGen, which leverages advances in 3D generation to improve performance in two key aspects: (i) To ensure 3D consistency, we first generate high-quality, controllable 3D conditions that capture the geometry of driving scenes. By replacing coarse 2D conditions with these fine-grained 3D representations, our approach significantly enhances the spatial consistency of the generated videos. (ii) Additionally, we introduce a consistency adapter module to strengthen the robustness of the model to multi-condition control. The results demonstrate that this method excels in preserving geometric fidelity and visual realism, offering a reliable video generation solution for autonomous driving.
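To make the adapter idea concrete, here is a minimal sketch of how a "consistency adapter" could inject features derived from 3D conditions into video-diffusion tokens via zero-initialized cross-attention, so an extra condition stream can be added without destabilizing a pretrained backbone. The paper does not publish its implementation; the module name, shapes, and zero-init design below are illustrative assumptions, not the authors' code.

# Hedged sketch (assumptions, not the authors' method): a small adapter that
# fuses tokens rendered from 3D conditions into video/latent tokens.
import torch
import torch.nn as nn

class ConsistencyAdapter(nn.Module):
    def __init__(self, dim: int, cond_dim: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.proj_cond = nn.Linear(cond_dim, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Zero-initialized output projection: the adapter starts as an
        # identity mapping and gradually learns to add condition information.
        self.out = nn.Linear(dim, dim)
        nn.init.zeros_(self.out.weight)
        nn.init.zeros_(self.out.bias)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x:    (B, N, dim)      video/latent tokens from the generator
        # cond: (B, M, cond_dim) tokens derived from a 3D scene representation
        c = self.proj_cond(cond)
        attn_out, _ = self.attn(self.norm(x), c, c)
        return x + self.out(attn_out)

if __name__ == "__main__":
    adapter = ConsistencyAdapter(dim=320, cond_dim=64)
    tokens = torch.randn(2, 1024, 320)   # e.g. flattened multi-view latents
    cond3d = torch.randn(2, 256, 64)     # e.g. projected 3D condition features
    print(adapter(tokens, cond3d).shape)  # torch.Size([2, 1024, 320])

Because the residual branch is zero-initialized, dropping or swapping a condition stream leaves the backbone's behavior intact at initialization, which is one plausible way to obtain the robustness to multi-condition control described in the abstract.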

@article{ji2025_2503.22231,
  title={CoGen: 3D Consistent Video Generation via Adaptive Conditioning for Autonomous Driving},
  author={Yishen Ji and Ziyue Zhu and Zhenxin Zhu and Kaixin Xiong and Ming Lu and Zhiqi Li and Lijun Zhou and Haiyang Sun and Bing Wang and Tong Lu},
  journal={arXiv preprint arXiv:2503.22231},
  year={2025}
}