In-depth Analysis of Graph-based RAG in a Unified Framework

6 March 2025
Yingli Zhou
Yaodong Su
Youran Sun
Shu Wang
Taotao Wang
Runyuan He
Yongwei Zhang
Sicong Liang
Xilin Liu
Yuchi Ma
Yixiang Fang
Abstract

Graph-based Retrieval-Augmented Generation (RAG) has proven effective in integrating external knowledge into large language models (LLMs), improving their factual accuracy, adaptability, interpretability, and trustworthiness. Although a number of graph-based RAG methods have been proposed in the literature, they have not been systematically and comprehensively compared under the same experimental settings. In this paper, we first summarize a unified framework that incorporates all graph-based RAG methods from a high-level perspective. We then extensively compare representative graph-based RAG methods on a range of question-answering (QA) datasets -- spanning both specific and abstract questions -- and examine the effectiveness of each method, providing a thorough analysis of graph-based RAG approaches. As a byproduct of this experimental analysis, we also identify new variants of graph-based RAG methods for specific-QA and abstract-QA tasks, respectively, which combine existing techniques and outperform state-of-the-art methods. Finally, based on these findings, we highlight promising research opportunities. We believe that a deeper understanding of the behavior of existing methods can provide valuable new insights for future research.
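To make the shared idea behind the methods surveyed here concrete, the following is a minimal, hypothetical sketch of the core graph-based RAG retrieval step: match query terms to graph nodes, expand to neighboring nodes, and assemble the retrieved descriptions into context for the LLM prompt. The toy graph, node names, and `retrieve_context` helper are all illustrative assumptions, not the paper's unified framework, which covers many richer variants (community detection, subgraph ranking, etc.).

```python
# Illustrative sketch only: seed-and-expand retrieval over a toy knowledge graph.
from collections import deque

# Hypothetical knowledge graph: node -> (description, list of neighbor nodes).
GRAPH = {
    "RAG": ("Retrieval-Augmented Generation grounds LLM answers in retrieved text.",
            ["LLM", "knowledge graph"]),
    "LLM": ("Large language models generate text conditioned on a prompt.",
            ["RAG"]),
    "knowledge graph": ("A graph of entities and relations used for retrieval.",
                        ["RAG"]),
}

def retrieve_context(query: str, hops: int = 1) -> str:
    """Seed on nodes whose name appears in the query, expand `hops` steps,
    and concatenate the node descriptions as LLM context."""
    q = query.lower()
    seeds = [n for n in GRAPH if n.lower() in q]
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:  # breadth-first expansion over the graph
        node, depth = frontier.popleft()
        if depth < hops:
            for nb in GRAPH[node][1]:
                if nb not in seen:
                    seen.add(nb)
                    frontier.append((nb, depth + 1))
    return "\n".join(GRAPH[n][0] for n in sorted(seen))

context = retrieve_context("How does RAG work?")
```

In this sketch the query seeds only the "RAG" node, and one hop of expansion pulls in its neighbors, so the assembled context also covers LLMs and knowledge graphs; real methods differ mainly in how seeding, expansion, and context assembly are done.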

@article{zhou2025_2503.04338,
  title={In-depth Analysis of Graph-based RAG in a Unified Framework},
  author={Yingli Zhou and Yaodong Su and Youran Sun and Shu Wang and Taotao Wang and Runyuan He and Yongwei Zhang and Sicong Liang and Xilin Liu and Yuchi Ma and Yixiang Fang},
  journal={arXiv preprint arXiv:2503.04338},
  year={2025}
}