VaViM and VaVAM: Autonomous Driving through Video Generative Modeling

24 February 2025
Florent Bartoccioni
Elias Ramzi
Victor Besnier
Shashanka Venkataramanan
Tuan-Hung Vu
Yihong Xu
Loick Chambon
Spyros Gidaris
Serkan Odabas
David Hurych
Renaud Marlet
Alexandre Boulch
Mickael Chen
Éloi Zablocki
Andrei Bursuc
Eduardo Valle
Matthieu Cord
Abstract

We explore the potential of large-scale generative video models for autonomous driving, introducing an open-source auto-regressive video model (VaViM) and its companion video-action model (VaVAM) to investigate how video pre-training transfers to real-world driving. VaViM is a simple auto-regressive video model that predicts frames using spatio-temporal token sequences. We show that it captures the semantics and dynamics of driving scenes. VaVAM, the video-action model, leverages the learned representations of VaViM to generate driving trajectories through imitation learning. Together, the models form a complete perception-to-action pipeline. We evaluate our models in open- and closed-loop driving scenarios, revealing that video-based pre-training holds promise for autonomous driving. Key insights include the semantic richness of the learned representations, the benefits of scaling for video synthesis, and the complex relationship between model size, data, and safety metrics in closed-loop evaluations. We release code and model weights at this https URL.
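To make the perception-to-action pipeline described in the abstract concrete, below is a minimal sketch (assuming PyTorch) of an auto-regressive transformer over spatio-temporal video tokens with a trajectory head fitted by imitation learning. This is not the authors' released VaViM/VaVAM code; all module names, dimensions, token vocabularies, and trajectory shapes are illustrative assumptions.

# Minimal sketch: causal transformer over video tokens (VaViM-style)
# plus a trajectory head trained by imitation (VaVAM-style).
# All hyperparameters and shapes are placeholders.
import torch
import torch.nn as nn

class VideoTokenTransformer(nn.Module):
    """Predicts the next spatio-temporal token given the preceding ones."""
    def __init__(self, vocab_size=8192, d_model=512, n_layers=6, n_heads=8, max_len=2048):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):  # tokens: (B, T) int64 codes from a video tokenizer
        B, T = tokens.shape
        pos = torch.arange(T, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        h = self.blocks(x, mask=causal_mask)          # causal self-attention
        return self.lm_head(h), h                      # next-token logits, features

class TrajectoryHead(nn.Module):
    """Maps the video features to future ego waypoints (imitation learning)."""
    def __init__(self, d_model=512, horizon=6):
        super().__init__()
        self.horizon = horizon
        self.mlp = nn.Sequential(nn.Linear(d_model, 256), nn.ReLU(),
                                 nn.Linear(256, horizon * 2))  # (x, y) per waypoint

    def forward(self, features):                        # features: (B, T, d_model)
        return self.mlp(features[:, -1]).view(-1, self.horizon, 2)

# Stage 1: video pre-training with a next-token cross-entropy loss.
# Stage 2: fit the trajectory head with an L2 imitation loss against expert waypoints.
backbone, head = VideoTokenTransformer(), TrajectoryHead()
tokens = torch.randint(0, 8192, (2, 128))               # dummy tokenized driving clip
logits, feats = backbone(tokens)
pretrain_loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, 8192), tokens[:, 1:].reshape(-1))
expert_traj = torch.zeros(2, 6, 2)                       # dummy expert waypoints
imitation_loss = nn.functional.mse_loss(head(feats), expert_traj)

The two-stage structure mirrors the abstract: generative video pre-training provides the representations, and a lightweight action head turns them into driving trajectories.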

@article{bartoccioni2025_2502.15672,
  title={VaViM and VaVAM: Autonomous Driving through Video Generative Modeling},
  author={Florent Bartoccioni and Elias Ramzi and Victor Besnier and Shashanka Venkataramanan and Tuan-Hung Vu and Yihong Xu and Loick Chambon and Spyros Gidaris and Serkan Odabas and David Hurych and Renaud Marlet and Alexandre Boulch and Mickael Chen and Éloi Zablocki and Andrei Bursuc and Eduardo Valle and Matthieu Cord},
  journal={arXiv preprint arXiv:2502.15672},
  year={2025}
}