Multimodal Fusion and Vision-Language Models: A Survey for Robot Vision

3 April 2025
Xiaofeng Han, Shunpeng Chen, Zenghuang Fu, Zhe Feng, Lue Fan, Dong An, Changwei Wang, Li Guo, Weiliang Meng, Xiaopeng Zhang, Rongtao Xu, Shibiao Xu
Abstract

Robot vision has greatly benefited from advancements in multimodal fusion techniques and vision-language models (VLMs). We systematically review the applications of multimodal fusion in key robotic vision tasks, including semantic scene understanding, simultaneous localization and mapping (SLAM), 3D object detection, navigation and localization, and robot manipulation. We compare VLMs based on large language models (LLMs) with traditional multimodal fusion methods, analyzing their advantages, limitations, and synergies. Additionally, we conduct an in-depth analysis of commonly used datasets, evaluating their applicability and challenges in real-world robotic scenarios. Furthermore, we identify critical research challenges such as cross-modal alignment, efficient fusion strategies, real-time deployment, and domain adaptation, and propose future research directions, including self-supervised learning for robust multimodal representations, transformer-based fusion architectures, and scalable multimodal frameworks. Through a comprehensive review, comparative analysis, and forward-looking discussion, we provide a valuable reference for advancing multimodal perception and interaction in robotic vision. A comprehensive list of studies in this survey is available at this https URL.
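
To illustrate the transformer-based fusion architectures the abstract points to, here is a minimal cross-attention fusion block in PyTorch. This is a sketch only: the module name, dimensions, and dummy inputs are assumptions for illustration and are not taken from the survey itself.

import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative cross-attention block: image tokens attend to language tokens."""
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, img_tokens, txt_tokens):
        # Queries come from the vision stream; keys/values come from the language stream.
        fused, _ = self.attn(img_tokens, txt_tokens, txt_tokens)
        x = self.norm1(img_tokens + fused)   # residual connection + layer norm
        return self.norm2(x + self.ffn(x))   # position-wise feed-forward refinement

# Dummy usage: batch of 2 samples, 196 image patch tokens and 32 text tokens (hypothetical shapes).
img = torch.randn(2, 196, 256)
txt = torch.randn(2, 32, 256)
out = CrossModalFusion()(img, txt)  # fused visual features, shape (2, 196, 256)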

@article{han2025_2504.02477,
  title={Multimodal Fusion and Vision-Language Models: A Survey for Robot Vision},
  author={Xiaofeng Han and Shunpeng Chen and Zenghuang Fu and Zhe Feng and Lue Fan and Dong An and Changwei Wang and Li Guo and Weiliang Meng and Xiaopeng Zhang and Rongtao Xu and Shibiao Xu},
  journal={arXiv preprint arXiv:2504.02477},
  year={2025}
}