Multimodal Large Language Models for Text-rich Image Understanding: A Comprehensive Review
Pei Fu
Tongkun Guan
Zining Wang
Zhentao Guo
Chen Duan
Hao Sun
Boming Chen
Jiayao Ma
Qianyi Jiang
Kai Zhou
Junfeng Luo

Abstract
The recent emergence of Multimodal Large Language Models (MLLMs) has introduced a new dimension to the Text-rich Image Understanding (TIU) field, with models demonstrating impressive and inspiring performance. However, their rapid evolution and widespread adoption have made it increasingly challenging to keep up with the latest advancements. To address this, we present a systematic and comprehensive survey to facilitate further research on TIU MLLMs. First, we outline the timeline, architecture, and pipeline of nearly all TIU MLLMs. Then, we review the performance of selected models on mainstream benchmarks. Finally, we explore promising directions, challenges, and limitations within the field.
@article{fu2025_2502.16586,
  title={Multimodal Large Language Models for Text-rich Image Understanding: A Comprehensive Review},
  author={Pei Fu and Tongkun Guan and Zining Wang and Zhentao Guo and Chen Duan and Hao Sun and Boming Chen and Jiayao Ma and Qianyi Jiang and Kai Zhou and Junfeng Luo},
  journal={arXiv preprint arXiv:2502.16586},
  year={2025}
}