Exploring and Evaluating Multimodal Knowledge Reasoning Consistency of Multimodal Large Language Models
In recent years, multimodal large language models (MLLMs) have achieved significant breakthroughs, enhancing understanding across text and vision. However, current MLLMs still face challenges in effectively integrating knowledge across these modalities during multimodal knowledge reasoning, leading to inconsistencies in reasoning outcomes. To systematically explore this issue, we propose four evaluation tasks and construct a new dataset. We conduct a series of experiments on this dataset to analyze and compare the extent of consistency degradation in multimodal knowledge reasoning within MLLMs. Based on the experimental results, we identify factors contributing to the observed degradation in consistency. Our research provides new insights into the challenges of multimodal knowledge reasoning and offers valuable guidance for future efforts aimed at improving MLLMs.
@article{jia2025_2503.04801,
  title={Exploring and Evaluating Multimodal Knowledge Reasoning Consistency of Multimodal Large Language Models},
  author={Boyu Jia and Junzhe Zhang and Huixuan Zhang and Xiaojun Wan},
  journal={arXiv preprint arXiv:2503.04801},
  year={2025}
}