
Frustratingly Simple Retrieval Improves Challenging, Reasoning-Intensive Benchmarks

Xinxi Lyu
Michael Duan
Rulin Shao
Pang Wei Koh
Sewon Min
Main: 10 Pages
2 Figures
Bibliography: 5 Pages
35 Tables
Appendix: 18 Pages
Abstract

Retrieval-Augmented Generation (RAG) has primarily been studied in limited settings such as factoid question answering; more challenging, reasoning-intensive benchmarks have seen limited success from minimal RAG. In this work, we challenge this prevailing view on established, reasoning-intensive benchmarks: MMLU, MMLU Pro, AGI Eval, GPQA, and MATH. We identify a key missing component in prior work: a usable, web-scale datastore aligned with the breadth of pretraining data. To this end, we introduce CompactDS: a diverse, high-quality, web-scale datastore that achieves high retrieval accuracy and subsecond latency on a single node. The key insights are (1) most web content can be filtered out without sacrificing coverage, and a compact, high-quality subset is sufficient; and (2) combining in-memory approximate nearest neighbor (ANN) retrieval with on-disk exact search balances speed and recall. Using CompactDS, we show that a minimal RAG pipeline achieves consistent accuracy improvements across all benchmarks and model sizes (8B--70B), with relative gains of 10% on MMLU, 33% on MMLU Pro, 14% on GPQA, and 19% on MATH. No single data source suffices alone, highlighting the importance of source diversity (web crawls, curated math, academic papers, textbooks). Finally, we show that our carefully designed in-house datastore matches or outperforms web search engines such as Google Search, as well as recently proposed, complex agent-based RAG systems, all while maintaining simplicity, reproducibility, and self-containment. We release CompactDS and our retrieval pipeline to support future research on retrieval-based AI systems.
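The abstract's second insight describes a two-stage retrieval design: an in-memory ANN index proposes candidates, and exact search over full-precision vectors stored on disk re-ranks them. Below is a minimal sketch of that pattern, assuming FAISS for the in-memory ANN stage and a memory-mapped NumPy array for the on-disk exact re-scoring; the dimensions, index parameters, and file names are illustrative and are not the released CompactDS pipeline.

```python
# Minimal sketch (not the authors' released code): in-memory ANN candidate
# retrieval followed by exact inner-product re-ranking over on-disk vectors.
# Requires: pip install faiss-cpu numpy
import faiss
import numpy as np

d, n = 768, 100_000                      # embedding dim and corpus size (toy values)
rng = np.random.default_rng(0)
corpus = rng.standard_normal((n, d)).astype("float32")

# Full-precision vectors live on disk; only the ANN index is kept in RAM.
corpus.tofile("corpus_vectors.f32")
on_disk = np.memmap("corpus_vectors.f32", dtype="float32", mode="r", shape=(n, d))

# Stage 1: in-memory ANN (IVF) for cheap, high-recall candidate generation.
quantizer = faiss.IndexFlatIP(d)
ann = faiss.IndexIVFFlat(quantizer, d, 1024, faiss.METRIC_INNER_PRODUCT)
ann.train(corpus)
ann.add(corpus)
ann.nprobe = 32                          # number of inverted lists probed per query

def retrieve(query: np.ndarray, k: int = 10, n_candidates: int = 1000):
    """Return top-k doc ids: ANN proposes candidates, exact search re-ranks them."""
    _, cand_ids = ann.search(query[None, :], n_candidates)
    cand_ids = cand_ids[0][cand_ids[0] >= 0]          # drop -1 padding
    # Stage 2: exact inner-product scoring against on-disk full-precision vectors.
    exact_scores = np.asarray(on_disk[cand_ids]) @ query
    order = np.argsort(-exact_scores)[:k]
    return cand_ids[order], exact_scores[order]

ids, scores = retrieve(rng.standard_normal(d).astype("float32"))
print(ids, scores)
```

The two stages trade off differently: the ANN pass bounds latency and memory, while the exact re-scoring pass recovers recall lost to approximation, which is the balance the abstract attributes to CompactDS.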

@article{lyu2025_2507.01297,
  title={Frustratingly Simple Retrieval Improves Challenging, Reasoning-Intensive Benchmarks},
  author={Xinxi Lyu and Michael Duan and Rulin Shao and Pang Wei Koh and Sewon Min},
  journal={arXiv preprint arXiv:2507.01297},
  year={2025}
}