ResearchTrend.AI

A Review of 3D Object Detection with Vision-Language Models

25 April 2025
Ranjan Sapkota
Konstantinos I Roumeliotis
Rahul Harsha Cheppally
Marco Flores Calero
Manoj Karkee
Abstract

This review provides a comprehensive survey of 3D object detection with vision-language models (VLMs), a rapidly advancing area at the intersection of 3D vision and multimodal AI. By examining over 100 research papers, we provide the first systematic analysis dedicated to this topic. We begin by outlining its unique challenges, emphasizing differences from 2D detection in spatial reasoning and data complexity. Traditional approaches using point clouds and voxel grids are compared to modern vision-language frameworks such as CLIP and 3D LLMs, which enable open-vocabulary detection and zero-shot generalization. We review key architectures, pretraining strategies, and prompt engineering methods that align textual and 3D features for effective detection. Visualization examples and evaluation benchmarks are discussed to illustrate performance and behavior. Finally, we highlight current challenges, such as limited 3D-language datasets and computational demands, and propose future research directions.

Keywords: Object Detection, Vision-Language Models, Agents, VLMs, LLMs, AI
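The open-vocabulary detection idea mentioned above can be illustrated with a minimal sketch: region features from a 3D detector are projected into a shared CLIP-style embedding space and scored against text embeddings of arbitrary class prompts. This is not code from the paper; the prompts are hypothetical and random vectors stand in for real pretrained VLM embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 512  # typical CLIP embedding width

def normalize(x, axis=-1):
    """L2-normalize vectors so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Text embeddings for an open vocabulary of class prompts.
# A real system would encode these with a pretrained text encoder.
prompts = ["a photo of a car", "a photo of a pedestrian", "a photo of a cyclist"]
text_embeds = normalize(rng.standard_normal((len(prompts), EMBED_DIM)))

# Features for four 3D region proposals, projected into the same space.
region_feats = normalize(rng.standard_normal((4, EMBED_DIM)))

# Cosine similarity, temperature-scaled, then softmax over classes
# (numerically stabilized by subtracting the per-row maximum).
logits = region_feats @ text_embeds.T * 100.0   # (num_regions, num_classes)
logits -= logits.max(axis=1, keepdims=True)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Each proposal is labeled by its best-matching prompt; because the
# vocabulary is just a list of strings, new classes need no retraining.
labels = [prompts[i] for i in probs.argmax(axis=1)]
```

Swapping in a different prompt list is all it takes to change the detector's vocabulary, which is what enables the zero-shot generalization the abstract refers to.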

View on arXiv
@article{sapkota2025_2504.18738,
  title={A Review of 3D Object Detection with Vision-Language Models},
  author={Ranjan Sapkota and Konstantinos I Roumeliotis and Rahul Harsha Cheppally and Marco Flores Calero and Manoj Karkee},
  journal={arXiv preprint arXiv:2504.18738},
  year={2025}
}