
G²-Reader: Dual Evolving Graphs for Multimodal Document QA

Yaxin Du
Junru Song
Yifan Zhou
Cheng Wang
Jiahao Gu
Zimeng Chen
Menglan Chen
Wen Yao
Yang Yang
Ying Wen
Siheng Chen
Main: 3 Pages
9 Figures
11 Tables
Appendix: 23 Pages
Abstract

Retrieval-augmented generation is a practical paradigm for question answering over long documents, but it remains brittle for multimodal reading where text, tables, and figures are interleaved across many pages. First, flat chunking breaks document-native structure and cross-modal alignment, yielding semantic fragments that are hard to interpret in isolation. Second, even iterative retrieval can fail in long contexts by looping on partial evidence or drifting into irrelevant sections as noise accumulates, since each step is guided only by the current snippet without a persistent global search state. We introduce G²-Reader, a dual-graph system, to address both issues. It evolves a Content Graph to preserve document-native structure and cross-modal semantics, and maintains a Planning Graph, an agentic directed acyclic graph of sub-questions, to track intermediate findings and guide stepwise navigation for evidence completion. On VisDoMBench across five multimodal domains, G²-Reader with Qwen3-VL-32B-Instruct reaches 66.21% average accuracy, outperforming strong baselines and a standalone GPT-5 (53.08%).
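The Planning Graph idea — a DAG of sub-questions whose resolved findings gate which node is explored next — can be illustrated with a minimal sketch. This is not the authors' implementation; the class and method names (`PlanningGraph`, `next_open`, `record`) are hypothetical, and the example only shows the dependency-gated traversal that a persistent search state enables:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SubQuestion:
    qid: str
    text: str
    deps: List[str] = field(default_factory=list)  # sub-questions this one depends on
    finding: Optional[str] = None                  # evidence recorded so far, None = unresolved

class PlanningGraph:
    """Toy DAG of sub-questions: resolved findings unlock dependent nodes."""

    def __init__(self):
        self.nodes = {}  # qid -> SubQuestion, insertion-ordered

    def add(self, qid: str, text: str, deps=()):
        self.nodes[qid] = SubQuestion(qid, text, list(deps))

    def record(self, qid: str, finding: str):
        """Store an intermediate finding, marking the sub-question resolved."""
        self.nodes[qid].finding = finding

    def next_open(self) -> Optional[SubQuestion]:
        """Return the first unresolved sub-question whose dependencies are all resolved."""
        for sq in self.nodes.values():
            if sq.finding is None and all(
                self.nodes[d].finding is not None for d in sq.deps
            ):
                return sq
        return None  # every sub-question resolved: evidence is complete

pg = PlanningGraph()
pg.add("q1", "Which table reports overall accuracy?")
pg.add("q2", "Does the figure on that page agree with the table?", deps=["q1"])
print(pg.next_open().qid)  # q2 stays blocked until q1 has a recorded finding
```

Because the graph persists across retrieval steps, each step consults the global state rather than only the current snippet, which is the mechanism the abstract credits for avoiding loops on partial evidence.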
