Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models

25 September 2024
Alexander Popov
Alperen Degirmenci
David Wehr
Shashank Hegde
Ryan Oldja
Alexey Kamenev
Bertrand Douillard
David Nistér
Urs Muller
Ruchi Bhargava
Stan Birchfield
Nikolai Smolyanskiy
Abstract

We propose the use of latent space generative world models to address the covariate shift problem in autonomous driving. A world model is a neural network capable of predicting an agent's next state given past states and actions. By leveraging a world model during training, the driving policy effectively mitigates covariate shift without requiring an excessive amount of training data. During end-to-end training, our policy learns how to recover from errors by aligning with states observed in human demonstrations, so that at runtime it can recover from perturbations outside the training distribution. Additionally, we introduce a novel transformer-based perception encoder that employs multi-view cross-attention and a learned scene query. We present qualitative and quantitative results, demonstrating significant improvements upon prior state of the art in closed-loop testing in the CARLA simulator, as well as showing the ability to handle perturbations in both CARLA and NVIDIA's DRIVE Sim.

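The page itself carries no code, so the sketch below is purely illustrative: a minimal PyTorch reading of the abstract, assuming a single learned scene query fused over multi-view image tokens by cross-attention, a latent dynamics model that predicts the next latent from the current latent and action, and a rollout loss that keeps imagined latents close to those of the human demonstrations. All module names, dimensions, and the exact loss form are assumptions, not the authors' implementation.

# Hypothetical PyTorch sketch of latent-space world-model imitation training.
# Names, shapes, and the loss are illustrative assumptions based on the abstract.
import torch
import torch.nn as nn


class MultiViewEncoder(nn.Module):
    """Fuses per-camera feature tokens into one scene latent via cross-attention
    against a single learned scene query (an assumption based on the abstract)."""

    def __init__(self, feat_dim=256, n_heads=8):
        super().__init__()
        self.scene_query = nn.Parameter(torch.randn(1, 1, feat_dim))
        self.cross_attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)

    def forward(self, view_feats):               # (B, n_views * tokens, feat_dim)
        q = self.scene_query.expand(view_feats.size(0), -1, -1)
        scene, _ = self.cross_attn(q, view_feats, view_feats)
        return scene.squeeze(1)                  # (B, feat_dim)


class LatentWorldModel(nn.Module):
    """Predicts the next latent state from the current latent and action."""

    def __init__(self, feat_dim=256, act_dim=2):
        super().__init__()
        self.dynamics = nn.Sequential(
            nn.Linear(feat_dim + act_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )

    def forward(self, latent, action):
        return self.dynamics(torch.cat([latent, action], dim=-1))


class Policy(nn.Module):
    """Maps a latent state to a driving action (e.g. steering, acceleration)."""

    def __init__(self, feat_dim=256, act_dim=2):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, act_dim))

    def forward(self, latent):
        return self.head(latent)


def rollout_imitation_loss(world_model, policy, demo_latents, demo_actions):
    """Roll the policy forward inside the world model and penalise both action
    error and drift of the imagined latents away from the demonstration latents,
    one simple way to realise the covariate-shift mitigation the abstract describes."""
    latent = demo_latents[:, 0]                  # demo_latents: (B, T+1, feat_dim)
    action_loss, state_loss = 0.0, 0.0
    horizon = demo_actions.size(1)               # demo_actions: (B, T, act_dim)
    for t in range(horizon):
        pred_action = policy(latent)
        action_loss = action_loss + nn.functional.mse_loss(pred_action, demo_actions[:, t])
        latent = world_model(latent, pred_action)            # imagined next state
        state_loss = state_loss + nn.functional.mse_loss(latent, demo_latents[:, t + 1])
    return (action_loss + state_loss) / horizon

In this reading, demo_latents would come from running the multi-view encoder over recorded camera frames; at deployment the policy acts directly on the encoder's latent, with the world model used only during training.
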
@article{popov2024_2409.16663,
  title={Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models},
  author={Alexander Popov and Alperen Degirmenci and David Wehr and Shashank Hegde and Ryan Oldja and Alexey Kamenev and Bertrand Douillard and David Nistér and Urs Muller and Ruchi Bhargava and Stan Birchfield and Nikolai Smolyanskiy},
  journal={arXiv preprint arXiv:2409.16663},
  year={2024}
}