LexRAG: Benchmarking Retrieval-Augmented Generation in Multi-Turn Legal Consultation Conversation

Abstract

Retrieval-augmented generation (RAG) has proven highly effective in improving large language models (LLMs) across various domains. However, no benchmark has been specifically designed to assess the effectiveness of RAG in the legal domain, which restricts progress in this area. To fill this gap, we propose LexRAG, the first benchmark for evaluating RAG systems on multi-turn legal consultations. LexRAG consists of 1,013 multi-turn dialogue samples and 17,228 candidate legal articles. Each sample is annotated by legal experts and consists of five rounds of progressive questioning. LexRAG includes two key tasks: (1) conversational knowledge retrieval, which requires accurately retrieving relevant legal articles from multi-turn context, and (2) response generation, which focuses on producing legally sound answers. To ensure reproducibility, we develop LexiT, a legal RAG toolkit that provides a comprehensive implementation of RAG system components tailored to the legal domain. Additionally, we introduce an LLM-as-a-judge evaluation pipeline to enable detailed and effective assessment. Through experimental analysis of various LLMs and retrieval methods, we reveal key limitations of existing RAG systems in handling legal consultation conversations. LexRAG establishes a new benchmark for the practical application of RAG systems in the legal domain, with its code and data available at this https URL.

@article{li2025_2502.20640,
  title={LexRAG: Benchmarking Retrieval-Augmented Generation in Multi-Turn Legal Consultation Conversation},
  author={Haitao Li and Yifan Chen and Yiran Hu and Qingyao Ai and Junjie Chen and Xiaoyu Yang and Jianhui Yang and Yueyue Wu and Zeyang Liu and Yiqun Liu},
  journal={arXiv preprint arXiv:2502.20640},
  year={2025}
}