
A Multi-Granularity Retrieval Framework for Visually-Rich Documents

Abstract

Retrieval-augmented generation (RAG) systems have predominantly focused on text-based retrieval, limiting their effectiveness in handling visually-rich documents that encompass text, images, tables, and charts. To bridge this gap, we propose a unified multi-granularity multimodal retrieval framework tailored for two benchmark tasks: MMDocIR and M2KR. Our approach integrates hierarchical encoding strategies, modality-aware retrieval mechanisms, and vision-language model (VLM)-based candidate filtering to effectively capture and utilize the complex interdependencies between textual and visual modalities. By leveraging off-the-shelf vision-language models and implementing a training-free hybrid retrieval strategy, our framework demonstrates robust performance without the need for task-specific fine-tuning. Experimental evaluations reveal that incorporating layout-aware search and VLM-based candidate verification significantly enhances retrieval accuracy, achieving a top performance score of 65.56. This work underscores the potential of scalable and reproducible solutions in advancing multimodal document retrieval systems.
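
The abstract describes the pipeline only at a high level: candidates are scored by a training-free hybrid (lexical plus dense) retriever, and the top candidates are then re-checked by an off-the-shelf VLM. The sketch below illustrates that control flow in Python. It is not the authors' released code: embed_text, embed_page, vlm_verify, and the fusion weight alpha are hypothetical stand-ins for whatever encoders and VLM a real system would plug in, and the scoring functions are deliberately simplified placeholders.

import numpy as np

# --- Hypothetical model hooks (stand-ins for real encoders / a real VLM) ---

def embed_text(text: str) -> np.ndarray:
    """Hypothetical text encoder; returns a deterministic unit-norm vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def embed_page(page: dict) -> np.ndarray:
    """Hypothetical page encoder; a layout-aware system would also encode
    images, tables, and charts, but here we embed the extracted text only."""
    return embed_text(page["text"])

def vlm_verify(query: str, page: dict) -> float:
    """Hypothetical VLM relevance check; a real system would prompt an
    off-the-shelf VLM with the page image and parse out a relevance score.
    Here, a term-overlap proxy keeps the sketch self-contained."""
    q_terms = set(query.lower().split())
    p_terms = set(page["text"].lower().split())
    return len(q_terms & p_terms) / max(len(q_terms), 1)

# --- Training-free hybrid retrieval with VLM-based candidate filtering ---

def lexical_score(query: str, page: dict) -> float:
    """Term-overlap proxy for a lexical retriever (e.g. BM25)."""
    q = set(query.lower().split())
    p = set(page["text"].lower().split())
    return len(q & p) / max(len(q), 1)

def hybrid_retrieve(query, pages, alpha=0.5, top_k=10, verify_k=3):
    q_vec = embed_text(query)
    scored = []
    for page in pages:
        dense = float(q_vec @ embed_page(page))   # dense similarity
        lex = lexical_score(query, page)          # lexical similarity
        scored.append((alpha * dense + (1 - alpha) * lex, page))
    scored.sort(key=lambda s: s[0], reverse=True)
    candidates = [p for _, p in scored[:top_k]]
    # The VLM pass re-scores only the shortlist, keeping the pipeline cheap
    # while letting the stronger model filter out false positives.
    verified = sorted(candidates, key=lambda p: vlm_verify(query, p), reverse=True)
    return verified[:verify_k]

if __name__ == "__main__":
    pages = [
        {"id": 1, "text": "table of quarterly revenue by region"},
        {"id": 2, "text": "bar chart of model accuracy on MMDocIR"},
        {"id": 3, "text": "introduction and related work"},
    ]
    for page in hybrid_retrieve("accuracy chart MMDocIR", pages):
        print(page["id"], page["text"])

The two-stage structure mirrors the paper's stated design choice: a cheap first-stage retriever over all candidates, followed by VLM-based verification of a small shortlist, so the expensive model never touches the full corpus.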

arXiv: https://arxiv.org/abs/2505.01457
@article{xu2025_2505.01457,
  title={A Multi-Granularity Retrieval Framework for Visually-Rich Documents},
  author={Mingjun Xu and Zehui Wang and Hengxing Cai and Renxin Zhong},
  journal={arXiv preprint arXiv:2505.01457},
  year={2025}
}