LawGPT: Knowledge-Guided Data Generation and Its Application to Legal LLM

10 February 2025
Zhi Zhou, Kun-Yang Yu, Shi-Yu Tian, Xiao-Wen Yang, Jiang-Xin Shi, Pengxiao Song, Yi-Xuan Jin, Lan-Zhe Guo, Yu-Feng Li
Communities: ELM, AILaw
Abstract

Large language models (LLMs), both proprietary and open-source, have demonstrated remarkable capabilities across various natural language processing tasks. However, they face significant limitations in legal reasoning tasks. Proprietary models introduce data privacy risks and high inference costs, while open-source models underperform due to insufficient legal domain training data. To address these limitations, we study data generation for legal reasoning to improve the legal reasoning performance of open-source LLMs with the help of proprietary LLMs. This is challenging due to the lack of legal knowledge in proprietary LLMs and the difficulty in verifying the generated data. We propose KgDG, a knowledge-guided data generation framework for legal reasoning. Our framework leverages legal knowledge to enhance generation diversity and introduces a refinement and verification process to ensure the quality of generated data. Moreover, we expand the generated dataset to further enhance the LLM reasoning capabilities. Using KgDG, we create a synthetic legal reasoning dataset containing 50K high-quality examples. Our trained model LawGPT outperforms existing legal-specific LLMs and achieves performance comparable to proprietary LLMs, demonstrating the effectiveness of KgDG and LawGPT. Our code and resources are publicly available at this https URL.
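
The abstract describes a pipeline with four stages: knowledge-guided generation, refinement, verification, and dataset expansion. The following is a minimal Python sketch of how such a pipeline could be wired together. It is not the authors' implementation: the call_llm stub, the prompts, the yes/no judge criterion, and the paraphrase-based expansion are all illustrative assumptions drawn only from the abstract's wording.

import random
from dataclasses import dataclass

@dataclass
class Example:
    question: str
    answer: str

def call_llm(prompt: str) -> str:
    """Placeholder for a proprietary-LLM API call (hypothetical stub).
    Replace with a real client before use."""
    raise NotImplementedError("wire this to an actual LLM endpoint")

def generate(knowledge_base: list[str], n: int) -> list[Example]:
    """Knowledge-guided generation: condition each sample on a retrieved
    legal statute so outputs stay diverse and grounded."""
    examples = []
    for _ in range(n):
        statute = random.choice(knowledge_base)  # inject legal knowledge
        raw = call_llm(
            "Using this statute, write a legal reasoning question and a "
            f"step-by-step answer, separated by 'Answer:'.\nStatute: {statute}"
        )
        q, _, a = raw.partition("Answer:")
        examples.append(Example(q.strip(), a.strip()))
    return examples

def refine(ex: Example) -> Example:
    """Refinement pass: ask the model to polish its own output."""
    improved = call_llm(
        f"Revise this answer for legal accuracy and clarity.\n"
        f"Q: {ex.question}\nA: {ex.answer}"
    )
    return Example(ex.question, improved.strip())

def verify(ex: Example) -> bool:
    """Verification pass: keep only examples an LLM judge accepts.
    A binary judge prompt is an assumption; the paper's check may differ."""
    verdict = call_llm(
        f"Is this answer legally sound? Reply yes or no.\n"
        f"Q: {ex.question}\nA: {ex.answer}"
    )
    return verdict.strip().lower().startswith("yes")

def expand(ex: Example) -> list[Example]:
    """Expansion: paraphrase a verified question to grow the dataset."""
    para = call_llm(f"Paraphrase this legal question: {ex.question}")
    return [ex, Example(para.strip(), ex.answer)]

def build_dataset(knowledge_base: list[str], target: int) -> list[Example]:
    data: list[Example] = []
    while len(data) < target:
        for ex in generate(knowledge_base, n=32):
            ex = refine(ex)
            if verify(ex):  # discard generations the judge rejects
                data.extend(expand(ex))
    return data[:target]

Under these assumptions, the verification step acts as a quality filter between generation and expansion, so only judged-sound examples are multiplied toward the 50K-example target.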

@article{zhou2025_2502.06572,
  title={LawGPT: Knowledge-Guided Data Generation and Its Application to Legal LLM},
  author={Zhi Zhou and Kun-Yang Yu and Shi-Yu Tian and Xiao-Wen Yang and Jiang-Xin Shi and Pengxiao Song and Yi-Xuan Jin and Lan-Zhe Guo and Yu-Feng Li},
  journal={arXiv preprint arXiv:2502.06572},
  year={2025}
}