SynLexLM: Scaling Legal LLMs with Synthetic Data and Curriculum Learning

Large Language Models (LLMs) are powerful but often require extensive fine-tuning and large datasets for specialized domains like law. General-purpose pre-training may not capture legal nuances, and acquiring sufficient legal data is challenging. We introduce SynLexLM, a novel approach to efficiently pre-train a legal LLM. Our method employs curriculum learning, progressing from simple to complex legal texts and queries, combined with synthetic data augmentation using models like Gemini Pro to address data scarcity. We aim to improve performance on legal benchmarks (BigLaw-Bench, EUR-Lex-Sum) over traditional models and their fine-tuned versions. Preliminary work involves generating synthetic QA pairs that reflect legal reasoning. Ultimately, this work seeks to enhance legal document analysis and research tools, potentially democratizing access to advanced legal AI.
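To make the curriculum and synthetic-data ideas concrete, below is a minimal Python sketch, not taken from the paper: it orders legal passages from simple to complex using a rough length-based difficulty proxy and drafts a prompt a generator model (e.g., Gemini Pro) could use to produce a synthetic QA pair. The functions `difficulty_score`, `build_curriculum`, and `make_qa_prompt` are hypothetical names, and the difficulty heuristic is an assumption, not the authors' actual scoring method.

```python
"""Illustrative sketch (not from the paper): simple-to-complex curriculum
ordering of legal texts plus a prompt template for synthetic QA generation."""

import re
from typing import List


def difficulty_score(text: str) -> float:
    """Rough difficulty proxy: longer sentences and longer words score higher."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    if not sentences or not words:
        return 0.0
    avg_sentence_len = len(words) / len(sentences)
    avg_word_len = sum(len(w) for w in words) / len(words)
    return avg_sentence_len + avg_word_len


def build_curriculum(documents: List[str], num_stages: int = 3) -> List[List[str]]:
    """Sort documents by the difficulty proxy and split them into training stages."""
    ranked = sorted(documents, key=difficulty_score)
    stage_size = max(1, len(ranked) // num_stages)
    return [ranked[i : i + stage_size] for i in range(0, len(ranked), stage_size)]


def make_qa_prompt(passage: str) -> str:
    """Prompt template a generator LLM could use to produce one legal-reasoning QA pair."""
    return (
        "Read the following legal passage and write one question that requires "
        "legal reasoning to answer, followed by a concise answer.\n\n"
        f"Passage:\n{passage}\n\nQuestion and answer:"
    )


if __name__ == "__main__":
    docs = [
        "The tenant must pay rent on the first day of each month.",
        "Notwithstanding any provision herein to the contrary, the indemnifying "
        "party shall hold harmless the indemnified party from all claims arising "
        "out of or relating to the performance of this agreement.",
    ]
    for stage, batch in enumerate(build_curriculum(docs, num_stages=2), start=1):
        print(f"Stage {stage}: {len(batch)} document(s)")
    print(make_qa_prompt(docs[0])[:80], "...")
```

In a full pipeline, each stage's documents would be fed to the pre-training loop in order, and the generated QA pairs would be filtered before being added to the training mix; both of those steps are omitted here.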
@article{upadhyay2025_2504.18762,
  title   = {SynLexLM: Scaling Legal LLMs with Synthetic Data and Curriculum Learning},
  author  = {Ojasw Upadhyay and Abishek Saravanakumar and Ayman Ismail},
  journal = {arXiv preprint arXiv:2504.18762},
  year    = {2025}
}