Mind with Eyes: from Language Reasoning to Multimodal Reasoning

23 March 2025
Zhiyu Lin
Yifei Gao
Xian Zhao
Yunfan Yang
Jitao Sang
Abstract

Language models have recently advanced into the realm of reasoning, yet it is through multimodal reasoning that we can fully unlock more comprehensive, human-like cognitive capabilities. This survey provides a systematic overview of recent multimodal reasoning approaches, categorizing them into two levels: language-centric multimodal reasoning and collaborative multimodal reasoning. The former encompasses one-pass visual perception and active visual perception, where vision primarily serves a supporting role in language reasoning. The latter involves action generation and state updates within the reasoning process, enabling more dynamic interaction between modalities. Furthermore, we analyze the technical evolution of these methods, discuss their inherent challenges, and introduce key benchmark tasks and evaluation metrics for assessing multimodal reasoning performance. Finally, we provide insights into future research directions from two perspectives: (i) from visual-language reasoning to omnimodal reasoning and (ii) from multimodal reasoning to multimodal agents. This survey aims to provide a structured overview that will inspire further advancements in multimodal reasoning research.

@article{lin2025_2503.18071,
  title={Mind with Eyes: from Language Reasoning to Multimodal Reasoning},
  author={Zhiyu Lin and Yifei Gao and Xian Zhao and Yunfan Yang and Jitao Sang},
  journal={arXiv preprint arXiv:2503.18071},
  year={2025}
}