Why Reasoning Matters? A Survey of Advancements in Multimodal Reasoning (v1)

Abstract

Reasoning is central to human intelligence, enabling structured problem-solving across diverse tasks. Recent advances in large language models (LLMs) have greatly enhanced their reasoning abilities in arithmetic, commonsense, and symbolic domains. However, effectively extending these capabilities into multimodal contexts, where models must integrate both visual and textual inputs, continues to be a significant challenge. Multimodal reasoning introduces complexities, such as handling conflicting information across modalities, which require models to adopt advanced interpretative strategies. Addressing these challenges involves not only sophisticated algorithms but also robust methodologies for evaluating reasoning accuracy and coherence. This paper offers a concise yet insightful overview of reasoning techniques in both textual and multimodal LLMs. Through a thorough and up-to-date comparison, we clearly formulate core reasoning challenges and opportunities, highlighting practical methods for post-training optimization and test-time inference. Our work provides valuable insights and guidance, bridging theoretical frameworks and practical implementations, and sets clear directions for future research.

@article{bi2025_2504.03151,
  title={Why Reasoning Matters? A Survey of Advancements in Multimodal Reasoning},
  author={Jing Bi and Susan Liang and Xiaofei Zhou and Pinxin Liu and Junjia Guo and Yunlong Tang and Luchuan Song and Chao Huang and Guangyu Sun and Jinxi He and Jiarui Wu and Shu Yang and Daoan Zhang and Chen Chen and Lianggong Bruce Wen and Zhang Liu and Jiebo Luo and Chenliang Xu},
  journal={arXiv preprint arXiv:2504.03151},
  year={2025}
}