Text-rich document understanding (TDU) requires comprehensive analysis of documents containing substantial textual content and complex layouts. While Multimodal Large Language Models (MLLMs) have made rapid progress in this domain, existing approaches either demand significant computational resources or struggle with effective multi-modal integration. In this paper, we introduce DocLayLLM, an efficient multi-modal extension of LLMs specifically designed for TDU. By lightly integrating visual patch tokens and 2D positional tokens into the LLM's input and encoding the document content with the LLM itself, we fully exploit the document comprehension capability of LLMs and enhance their perception of OCR information. We also examine the role of chain-of-thought (CoT) in depth and propose two techniques, CoT Pre-training and CoT Annealing. DocLayLLM achieves remarkable performance under lightweight training settings, demonstrating its efficiency and effectiveness. Experimental results show that DocLayLLM outperforms both existing OCR-dependent methods and OCR-free competitors. Code and model are available at this https URL.
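To make the input construction concrete, the sketch below illustrates one plausible way visual patch tokens and 2D positional tokens could be folded into an LLM's input embedding sequence. This is not the authors' implementation: the module names, embedding dimensions, coordinate bucketing, and the choice to prepend patch tokens are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): interleave visual patch
# tokens and 2D positional tokens with OCR text tokens before feeding a
# decoder-only LLM via its input embeddings.
import torch
import torch.nn as nn

class DocInputEmbedder(nn.Module):
    def __init__(self, llm_hidden=4096, vocab_size=32000,
                 patch_dim=1024, num_pos_buckets=1000):
        super().__init__()
        self.text_emb = nn.Embedding(vocab_size, llm_hidden)   # LLM token embeddings
        self.patch_proj = nn.Linear(patch_dim, llm_hidden)      # project visual patch features
        # one learnable table per box coordinate (x0, y0, x1, y1), an assumed scheme
        self.pos_emb = nn.ModuleList(
            nn.Embedding(num_pos_buckets, llm_hidden) for _ in range(4)
        )

    def forward(self, text_ids, patch_feats, boxes):
        """
        text_ids:    (T,)            OCR token ids
        patch_feats: (P, patch_dim)  visual patch features from a lightweight encoder
        boxes:       (T, 4)          word boxes bucketed into [0, num_pos_buckets)
        """
        txt = self.text_emb(text_ids)                           # (T, H)
        # add 2D positional tokens to each OCR text token
        for i, emb in enumerate(self.pos_emb):
            txt = txt + emb(boxes[:, i])
        vis = self.patch_proj(patch_feats)                      # (P, H)
        # prepend visual patch tokens; the LLM itself encodes the document content
        return torch.cat([vis, txt], dim=0)                     # (P + T, H)

# usage: pass the result to the LLM through its inputs_embeds interface
embedder = DocInputEmbedder()
seq = embedder(torch.randint(0, 32000, (12,)),
               torch.randn(49, 1024),
               torch.randint(0, 1000, (12, 4)))
print(seq.shape)  # torch.Size([61, 4096])
```

The design choice sketched here keeps the multi-modal additions lightweight: only a linear projection and small positional tables are introduced, while document comprehension is delegated to the LLM, consistent with the efficiency claim in the abstract.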
@article{liao2025_2408.15045,
  title   = {DocLayLLM: An Efficient Multi-modal Extension of Large Language Models for Text-rich Document Understanding},
  author  = {Wenhui Liao and Jiapeng Wang and Hongliang Li and Chengyu Wang and Jun Huang and Lianwen Jin},
  journal = {arXiv preprint arXiv:2408.15045},
  year    = {2025}
}