
Towards Agentic RAG with Deep Reasoning: A Survey of RAG-Reasoning Systems in LLMs

Yangning Li
Weizhi Zhang
Yuyao Yang
Wei-Chieh Huang
Yaozu Wu
Junyu Luo
Yuanchen Bei
Henry Peng Zou
Xiao Luo
Yusheng Zhao
Chunkit Chan
Yankai Chen
Zhongfen Deng
Yinghui Li
Hai-Tao Zheng
Dongyuan Li
Renhe Jiang
Ming Zhang
Yangqiu Song
Philip S. Yu
Main: 11 pages · 2 figures · 7 tables · Bibliography: 8 pages · Appendix: 7 pages
Abstract

Retrieval-Augmented Generation (RAG) lifts the factuality of Large Language Models (LLMs) by injecting external knowledge, yet it falls short on problems that demand multi-step inference; conversely, purely reasoning-oriented approaches often hallucinate or mis-ground facts. This survey synthesizes both strands under a unified reasoning-retrieval perspective. We first map how advanced reasoning optimizes each stage of RAG (Reasoning-Enhanced RAG). Then, we show how retrieved knowledge of different types supplies missing premises and expands context for complex inference (RAG-Enhanced Reasoning). Finally, we spotlight emerging Synergized RAG-Reasoning frameworks, where (agentic) LLMs iteratively interleave search and reasoning to achieve state-of-the-art performance across knowledge-intensive benchmarks. We categorize methods, datasets, and open challenges, and outline research avenues toward deeper RAG-Reasoning systems that are more effective, multimodally adaptive, trustworthy, and human-centric. The collection is available at this https URL.
