OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval

Vision-language retrieval-augmented generation (RAG) has become an effective approach for tackling Knowledge-Based Visual Question Answering (KB-VQA), which requires external knowledge beyond the visual content presented in images. The effectiveness of vision-language RAG systems hinges on multimodal retrieval, which is inherently challenging due to the diverse modalities and knowledge granularities in both queries and knowledge bases. Existing methods have not fully exploited the potential interplay between these elements. We propose a multimodal RAG system featuring a coarse-to-fine, multi-step retrieval strategy that harmonizes multiple granularities and modalities to enhance efficacy. Our system begins with a broad initial search that aligns knowledge granularity for cross-modal retrieval, followed by multimodal fusion reranking to capture nuanced multimodal information for top-entity selection. A text reranker then selects the most relevant fine-grained section for augmented generation. Extensive experiments on the InfoSeek and Encyclopedic-VQA benchmarks show that our method achieves state-of-the-art retrieval performance and highly competitive answering results, underscoring its effectiveness in advancing KB-VQA systems.
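The abstract outlines a three-stage, coarse-to-fine retrieval flow. The sketch below is a minimal illustration of that flow only; the encoders, the equal fusion weights, and the knowledge-base fields (`summary`, `image`, `sections`) are placeholder assumptions, not the paper's actual components.

```python
# Minimal sketch of a coarse-to-fine multimodal retrieval pipeline.
# All encoders and weights are stand-ins, not the paper's models.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

def embed_image(image) -> np.ndarray:    # stand-in visual encoder
    return rng.standard_normal(DIM)

def embed_text(text: str) -> np.ndarray: # stand-in text encoder
    return rng.standard_normal(DIM)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve(query_image, question, kb, k_coarse=100, k_entities=5):
    # Stage 1: broad cross-modal search over entity-level summaries,
    # aligning knowledge granularity with the visual query.
    q_img = embed_image(query_image)
    coarse = sorted(
        kb, key=lambda e: cosine(q_img, embed_text(e["summary"])), reverse=True
    )[:k_coarse]

    # Stage 2: multimodal fusion reranking; equal weights are an
    # illustrative assumption.
    q_txt = embed_text(question)
    def fused_score(e):
        return 0.5 * cosine(q_img, embed_image(e["image"])) + \
               0.5 * cosine(q_txt, embed_text(e["summary"]))
    entities = sorted(coarse, key=fused_score, reverse=True)[:k_entities]

    # Stage 3: text reranking over fine-grained sections of the selected
    # entities; the best section is passed to the generator.
    sections = [s for e in entities for s in e["sections"]]
    return max(sections, key=lambda s: cosine(q_txt, embed_text(s)))
```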
@article{yang2025_2505.07879,
  title   = {OMGM: Orchestrate Multiple Granularities and Modalities for Efficient Multimodal Retrieval},
  author  = {Wei Yang and Jingjing Fu and Rui Wang and Jinyu Wang and Lei Song and Jiang Bian},
  journal = {arXiv preprint arXiv:2505.07879},
  year    = {2025}
}