LLM-based Realistic Safety-Critical Driving Video Generation

2 July 2025
Yongjie Fu, Ruijian Zha, Pei Tian, Xuan Di
arXiv (abs) · PDF · HTML
Main: 5 pages · 3 figures · 2 tables · Bibliography: 1 page
Abstract

Designing diverse and safety-critical driving scenarios is essential for evaluating autonomous driving systems. In this paper, we propose a novel framework that leverages Large Language Models (LLMs) for few-shot code generation to automatically synthesize driving scenarios within the CARLA simulator, which offers flexible scenario scripting, efficient code-based control of traffic participants, and realistic physical dynamics. Given a few example prompts and code samples, the LLM generates safety-critical scenario scripts that specify the behavior and placement of traffic participants, with a particular focus on collision events. To bridge the gap between simulation and real-world appearance, we integrate a video generation pipeline using Cosmos-Transfer1 with ControlNet, which converts rendered scenes into realistic driving videos. Our approach enables controllable scenario generation and facilitates the creation of rare but critical edge cases, such as pedestrian crossings under occlusion or sudden vehicle cut-ins. Experimental results demonstrate the effectiveness of our method in generating a wide range of realistic, diverse, and safety-critical scenarios, offering a promising tool for simulation-based testing of autonomous vehicles.
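To make the scenario-scripting step concrete, the sketch below shows the kind of CARLA Python script the LLM is prompted to produce for one of the edge cases mentioned in the abstract, a pedestrian crossing under occlusion. This is not code from the paper: the blueprint IDs, spawn offsets, and timing values are illustrative assumptions, and the script assumes a standard CARLA 0.9.x server running locally.

```python
# Hypothetical example of an LLM-generated CARLA scenario script:
# a pedestrian steps out from behind a parked van ahead of the ego vehicle.
# Blueprint IDs, coordinates, and speeds are illustrative, not from the paper.
import time
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()
base = world.get_map().get_spawn_points()[0]

# Ego vehicle driving straight along its lane.
ego = world.spawn_actor(blueprints.find("vehicle.tesla.model3"), base)
ego.apply_control(carla.VehicleControl(throttle=0.6))

# Parked van roughly 25 m ahead that occludes the pedestrian.
van_tf = carla.Transform(
    carla.Location(x=base.location.x + 25.0, y=base.location.y + 3.0,
                   z=base.location.z),
    base.rotation,
)
van = world.spawn_actor(blueprints.find("vehicle.carlamotors.carlacola"), van_tf)

# Pedestrian hidden behind the van, spawned slightly above the ground.
walker_tf = carla.Transform(
    carla.Location(x=van_tf.location.x, y=van_tf.location.y + 2.0,
                   z=van_tf.location.z + 1.0)
)
walker = world.spawn_actor(blueprints.find("walker.pedestrian.0001"), walker_tf)

# After a short delay, command the pedestrian to cross in front of the ego
# vehicle, forcing a near-collision (or collision) event.
time.sleep(2.0)
walker.apply_control(
    carla.WalkerControl(direction=carla.Vector3D(y=-1.0), speed=2.5)
)

time.sleep(5.0)  # let the scenario play out, then clean up
for actor in (walker, van, ego):
    actor.destroy()
```

In the pipeline described by the abstract, scripts of this shape are produced by the LLM from a few example prompt/code pairs, executed in CARLA to render the scene, and the rendered frames are then passed to the Cosmos-Transfer1 + ControlNet stage to obtain realistic-looking driving video.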

@article{fu2025_2507.01264,
  title={LLM-based Realistic Safety-Critical Driving Video Generation},
  author={Yongjie Fu and Ruijian Zha and Pei Tian and Xuan Di},
  journal={arXiv preprint arXiv:2507.01264},
  year={2025}
}