
Structured Labeling Enables Faster Vision-Language Models for End-to-End Autonomous Driving

Main: 7 pages, 5 figures; Bibliography: 2 pages; 5 tables
Abstract

Vision-Language Models (VLMs) offer a promising approach to end-to-end autonomous driving due to their human-like reasoning capabilities. However, significant gaps remain between current VLMs and real-world autonomous driving applications. One major limitation is that existing datasets with loosely formatted language descriptions are not machine-friendly and may introduce redundancy. Additionally, the high computational cost and massive scale of VLMs hinder inference speed and real-world deployment. To bridge this gap, this paper introduces a structured and concise benchmark dataset, NuScenes-S, which is derived from the NuScenes dataset and contains machine-friendly structured representations. Moreover, we present FastDrive, a compact VLM baseline with 0.9B parameters. In contrast to existing VLMs with over 7B parameters and unstructured language processing (e.g., LLaVA-1.5), FastDrive understands structured and concise descriptions and generates machine-friendly driving decisions with high efficiency. Extensive experiments show that FastDrive achieves competitive performance on the structured dataset, with approximately 20% higher accuracy on decision-making tasks, while surpassing massive-parameter baselines in inference speed with over 10x speedup. Additionally, ablation studies examine the impact of scene annotations (e.g., weather, time of day) on decision-making, demonstrating their importance for autonomous driving.
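To illustrate the contrast the abstract draws between loosely formatted descriptions and machine-friendly structured representations, the sketch below shows a hypothetical scene record in the spirit of NuScenes-S. The field names and values are assumptions for illustration only; the abstract does not specify the actual schema.

```python
# Hypothetical example only: the actual NuScenes-S schema is not given in the abstract.

# A loosely formatted, free-form caption of the kind existing datasets use.
free_form_caption = (
    "It is a rainy night and a pedestrian is crossing ahead, "
    "so the ego vehicle should probably slow down and yield."
)

# A structured, concise equivalent that a compact VLM could parse and emit directly.
structured_scene = {
    "weather": "rainy",           # scene annotation of the kind studied in the ablations
    "time_of_day": "night",       # scene annotation of the kind studied in the ablations
    "key_objects": [
        {"type": "pedestrian", "position": "front", "state": "crossing"},
    ],
    "decision": {"speed": "decelerate", "path": "keep_lane"},
}
```

Keeping both the scene description and the driving decision in a fixed-key format like this removes redundant phrasing and makes the model's output directly consumable by a downstream planner, which is the efficiency argument the paper makes for structured labeling.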

@article{jiang2025_2506.05442,
  title={Structured Labeling Enables Faster Vision-Language Models for End-to-End Autonomous Driving},
  author={Hao Jiang and Chuan Hu and Yukang Shi and Yuan He and Ke Wang and Xi Zhang and Zhipeng Zhang},
  journal={arXiv preprint arXiv:2506.05442},
  year={2025}
}