Extending Large Vision-Language Model for Diverse Interactive Tasks in Autonomous Driving

Abstract

Large Vision-Language Models (LVLMs) have significantly advanced image understanding, and their comprehension and reasoning capabilities enable promising applications in autonomous driving scenarios. However, existing research typically focuses on front-view perspectives and partial objects within scenes, struggling to achieve comprehensive scene understanding. Meanwhile, existing LVLMs lack a mapping between 2D and 3D and insufficiently integrate 3D object localization with instruction understanding. To tackle these limitations, we first introduce NuInteract, a large-scale dataset with over 1.5M multi-view image-language pairs spanning dense scene captions and diverse interactive tasks. Furthermore, we propose DriveMonkey, a simple yet effective framework that seamlessly integrates LVLMs with a spatial processor using a series of learnable queries. The spatial processor, designed as a plug-and-play component, can be initialized with pre-trained 3D detectors to improve 3D perception. Our experiments show that DriveMonkey outperforms general LVLMs, achieving a notable 9.86% improvement on the 3D visual grounding task. The dataset and code will be released at this https URL.
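
The abstract describes DriveMonkey as connecting an LVLM to a plug-and-play spatial processor through a set of learnable queries. The PyTorch sketch below illustrates one plausible reading of that wiring; the class names (SpatialProcessorSketch, DriveMonkeySketch), the hidden size, the number of queries, the cross-attention interface, and the 3D box parameterization are all illustrative assumptions, not the authors' actual implementation.

# Minimal sketch of the described architecture, assuming a PyTorch-style interface.
# All module names, dimensions, and the query count are hypothetical choices.
import torch
import torch.nn as nn


class SpatialProcessorSketch(nn.Module):
    """Placeholder for the plug-and-play spatial processor.

    In the paper this component can be initialized from a pre-trained 3D
    detector; here it is a simple MLP that maps query features to assumed
    3D box parameters (x, y, z, w, l, h, yaw)."""

    def __init__(self, dim: int, box_dim: int = 7):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, box_dim))

    def forward(self, queries: torch.Tensor) -> torch.Tensor:
        # queries: (batch, num_queries, dim) -> (batch, num_queries, box_dim)
        return self.head(queries)


class DriveMonkeySketch(nn.Module):
    """Learnable queries read LVLM token features, then feed the spatial processor."""

    def __init__(self, dim: int = 256, num_queries: int = 16):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        # Cross-attention is one assumed way for the learnable queries to attend
        # to the LVLM's multi-view visual/text tokens.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.spatial_processor = SpatialProcessorSketch(dim)

    def forward(self, lvlm_tokens: torch.Tensor) -> torch.Tensor:
        # lvlm_tokens: (batch, num_tokens, dim) features produced by the LVLM backbone.
        b = lvlm_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        q, _ = self.cross_attn(q, lvlm_tokens, lvlm_tokens)
        return self.spatial_processor(q)


if __name__ == "__main__":
    dummy_tokens = torch.randn(2, 512, 256)  # stand-in for LVLM output features
    boxes = DriveMonkeySketch()(dummy_tokens)
    print(boxes.shape)  # torch.Size([2, 16, 7])

Because the spatial processor is isolated behind the query interface, swapping in a pre-trained 3D detector head (as the abstract describes) would only require replacing SpatialProcessorSketch, leaving the LVLM and the learnable queries untouched.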

@article{zhao2025_2505.08725,
  title={Extending Large Vision-Language Model for Diverse Interactive Tasks in Autonomous Driving},
  author={Zongchuang Zhao and Haoyu Fu and Dingkang Liang and Xin Zhou and Dingyuan Zhang and Hongwei Xie and Bing Wang and Xiang Bai},
  journal={arXiv preprint arXiv:2505.08725},
  year={2025}
}