RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework

Abstract

Retrieval-Augmented Generation (RAG) is a powerful approach that enables large language models (LLMs) to incorporate external knowledge. However, evaluating the effectiveness of RAG systems in specialized scenarios remains challenging due to the high costs of data construction and the lack of suitable evaluation metrics. This paper introduces RAGEval, a framework designed to assess RAG systems across diverse scenarios by generating high-quality documents, questions, answers, and references through a schema-based pipeline. With a focus on factual accuracy, we propose three novel metrics, Completeness, Hallucination, and Irrelevance, to rigorously evaluate LLM-generated responses. Experimental results show that RAGEval outperforms zero-shot and one-shot methods in terms of clarity, safety, conformity, and richness of generated samples. Furthermore, the use of LLMs for scoring the proposed metrics demonstrates a high level of consistency with human evaluations. RAGEval establishes a new paradigm for evaluating RAG systems in real-world applications. The code and dataset are released at this https URL.
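As a rough illustration of how keypoint-level scoring of the three metrics might look, here is a minimal Python sketch. The exact prompts and metric definitions are specified in the paper; the `Judge` callable, the `score_answer` helper, and the three labels below are hypothetical placeholders, with an LLM call standing behind the judge in practice.

```python
from collections import Counter
from typing import Callable, Iterable

# Hypothetical judge: maps (keypoint, answer) to one of three labels.
# In practice this would be an LLM call; here it is left abstract.
Judge = Callable[[str, str], str]  # returns "covered" | "contradicted" | "omitted"

def score_answer(keypoints: Iterable[str], answer: str, judge: Judge) -> dict:
    """Score a generated answer against ground-truth keypoints.

    Completeness  = fraction of keypoints the answer covers correctly.
    Hallucination = fraction of keypoints the answer contradicts.
    Irrelevance   = fraction of keypoints the answer fails to address.
    """
    keypoints = list(keypoints)
    labels = Counter(judge(kp, answer) for kp in keypoints)
    n = len(keypoints) or 1  # guard against an empty keypoint list
    return {
        "completeness": labels["covered"] / n,
        "hallucination": labels["contradicted"] / n,
        "irrelevance": labels["omitted"] / n,
    }

if __name__ == "__main__":
    # Toy usage with a trivial string-matching judge standing in for an LLM.
    def naive_judge(keypoint: str, answer: str) -> str:
        return "covered" if keypoint.lower() in answer.lower() else "omitted"

    kps = ["revenue grew 12% in 2023", "the audit found no irregularities"]
    ans = "The report notes that revenue grew 12% in 2023."
    print(score_answer(kps, ans, naive_judge))
    # -> {'completeness': 0.5, 'hallucination': 0.0, 'irrelevance': 0.5}
```

Framing the judge as a swappable callable makes it easy to compare LLM-based scoring against human annotation, which is how consistency between the two could be measured.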

@article{zhu2025_2408.01262,
  title={RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework},
  author={Kunlun Zhu and Yifan Luo and Dingling Xu and Yukun Yan and Zhenghao Liu and Shi Yu and Ruobing Wang and Shuo Wang and Yishan Li and Nan Zhang and Xu Han and Zhiyuan Liu and Maosong Sun},
  journal={arXiv preprint arXiv:2408.01262},
  year={2025}
}