Information Extraction from Visually Rich Documents using LLM-based Organization of Documents into Independent Textual Segments

Information extraction (IE) from Visually Rich Documents (VRDs), which contain layout features along with text, is a critical and well-studied task. Specialized non-LLM NLP-based solutions typically involve training models on both textual and geometric information to label sequences/tokens as named entities or as answers to specific questions. However, these approaches lack reasoning, cannot infer values not explicitly present in documents, and do not generalize well to new formats. Recently proposed generative LLM-based approaches are capable of reasoning, but struggle to comprehend clues from document layout, especially in previously unseen document formats, and do not show competitive performance on heterogeneous VRD benchmark datasets. In this paper, we propose BLOCKIE, a novel LLM-based approach that organizes VRDs into localized, reusable semantic textual segments called blocks, which are processed independently. Through focused and more generalizable reasoning, our approach outperforms the state-of-the-art on public VRD benchmarks by 1-3% in F1 score, is resilient to previously unseen document formats, and demonstrates the ability to correctly extract information not explicitly present in documents.
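As a rough illustration of the block-based pipeline the abstract describes, the Python sketch below groups OCR tokens into spatially coherent segments and then queries an LLM on each segment independently. All names here (Token, group_into_blocks, blockie_style_extract, the llm callable) and the proximity heuristic are assumptions made for illustration only; the paper's actual block construction is described at a much higher level, so this is a minimal sketch of the general idea rather than the authors' implementation.

from dataclasses import dataclass

@dataclass
class Token:
    text: str
    x0: float  # left edge of bounding box
    y0: float  # top edge
    x1: float  # right edge
    y1: float  # bottom edge

def group_into_blocks(tokens, y_gap=12.0, x_gap=40.0):
    """Greedily merge tokens whose bounding boxes are close vertically and
    horizontally into the same block. The thresholds are illustrative."""
    blocks = []
    for tok in sorted(tokens, key=lambda t: (t.y0, t.x0)):
        placed = False
        for block in blocks:
            last = block[-1]
            if abs(tok.y0 - last.y0) <= y_gap and (tok.x0 - last.x1) <= x_gap:
                block.append(tok)
                placed = True
                break
        if not placed:
            blocks.append([tok])
    return blocks

def block_text(block):
    return " ".join(t.text for t in block)

def extract_from_block(text, fields, llm):
    """Process one block independently: the prompt carries only the block's
    own text, so reasoning stays local to the segment."""
    prompt = (
        f"From this document segment, return values for {fields} "
        f"as JSON (use null if a field is absent):\n{text}"
    )
    return llm(prompt)  # `llm` is a hypothetical callable returning a dict

def blockie_style_extract(tokens, fields, llm):
    """Run extraction over every block and merge the per-block answers."""
    results = {}
    for block in group_into_blocks(tokens):
        for key, value in extract_from_block(block_text(block), fields, llm).items():
            if value is not None:
                results.setdefault(key, value)  # keep first non-null answer per field
    return results

Processing each block in isolation keeps every prompt short and independent of the overall page layout, which is one plausible reading of why a block-level decomposition would be resilient to document formats the model has not seen before.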
@article{bhattacharyya2025_2505.13535,
  title={Information Extraction from Visually Rich Documents using LLM-based Organization of Documents into Independent Textual Segments},
  author={Aniket Bhattacharyya and Anurag Tripathi and Ujjal Das and Archan Karmakar and Amit Pathak and Maneesh Gupta},
  journal={arXiv preprint arXiv:2505.13535},
  year={2025}
}