Position: Multimodal Large Language Models Can Significantly Advance Scientific Reasoning

Scientific reasoning, the process through which humans apply logic, evidence, and critical thinking to explore and interpret scientific phenomena, is essential for advancing knowledge across diverse fields. However, despite significant progress, current scientific reasoning models still struggle to generalize across domains and often lack robust multimodal perception. Multimodal Large Language Models (MLLMs), which integrate text, images, and other modalities, present an exciting opportunity to overcome these limitations and enhance scientific reasoning. This position paper therefore argues that MLLMs can significantly advance scientific reasoning across disciplines such as mathematics, physics, chemistry, and biology. First, we propose a four-stage research roadmap for scientific reasoning capabilities and survey the current state of MLLM applications in scientific reasoning, highlighting their ability to integrate and reason over diverse data types. Second, we summarize the key challenges that remain obstacles to realizing the full potential of MLLMs. To address these challenges, we offer actionable insights and suggestions for future research. Overall, our work offers a novel perspective on the integration of MLLMs with scientific reasoning, providing the LLM community with a valuable vision for achieving Artificial General Intelligence (AGI).
@article{yan2025_2502.02871,
  title={Position: Multimodal Large Language Models Can Significantly Advance Scientific Reasoning},
  author={Yibo Yan and Shen Wang and Jiahao Huo and Jingheng Ye and Zhendong Chu and Xuming Hu and Philip S. Yu and Carla Gomes and Bart Selman and Qingsong Wen},
  journal={arXiv preprint arXiv:2502.02871},
  year={2025}
}