Distill-C: Enhanced NL2SQL via Distilled Customization with LLMs

The growing adoption of large language models (LLMs) in business applications has amplified interest in Natural Language to SQL (NL2SQL) solutions, where high performance and efficiency are competing demands. Domain- and customer-specific requirements further complicate the problem. To address this conundrum, we introduce Distill-C, a distilled customization framework tailored for NL2SQL tasks. Distill-C utilizes large teacher LLMs to produce high-quality synthetic data through a robust and scalable pipeline. Fine-tuning smaller, open-source LLMs on this synthetic data enables them to rival or even outperform teacher models an order of magnitude larger. Evaluated on multiple challenging benchmarks, Distill-C achieves an average improvement of 36% in execution accuracy compared to the base models from three distinct LLM families. Additionally, on three internal customer benchmarks, Distill-C yields a 22.6% performance improvement over the base models. Our results demonstrate that Distill-C is an effective, high-performing, and generalizable approach for deploying lightweight yet powerful NL2SQL models, delivering high accuracy while maintaining low computational cost.
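To make the two mechanisms the abstract alludes to concrete, the following is a minimal Python sketch (not the authors' released code) of execution-checked filtering of teacher-generated synthetic (question, SQL) pairs and of the execution-accuracy metric used for evaluation. The toy schema, the sample pairs, and helper names such as executes_ok are hypothetical illustrations under assumed details, not the paper's actual pipeline.

    # Sketch: execution-based filtering of synthetic NL2SQL data,
    # plus execution accuracy (match of result sets, not SQL strings).
    import sqlite3

    def executes_ok(conn, sql):
        """Keep a synthetic SQL example only if it runs against the schema."""
        try:
            conn.execute(sql).fetchall()
            return True
        except sqlite3.Error:
            return False

    def execution_accuracy(conn, predicted, gold):
        """Fraction of predictions whose result set matches the gold query's."""
        def run(sql):
            try:
                return sorted(map(tuple, conn.execute(sql).fetchall()))
            except sqlite3.Error:
                return None
        hits = sum(
            1 for p, g in zip(predicted, gold)
            if (r := run(p)) is not None and r == run(g)
        )
        return hits / len(gold)

    # Toy database standing in for a customer-specific schema.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(1, 10.0, "EU"), (2, 25.0, "US"), (3, 5.0, "EU")])

    # Hypothetical teacher outputs: (question, SQL) pairs, one of them invalid.
    synthetic = [
        ("Total order amount per region",
         "SELECT region, SUM(amount) FROM orders GROUP BY region"),
        ("Number of EU orders",
         "SELECT COUNT(*) FROM orders WHERE region = 'EU'"),
        ("Broken example",
         "SELECT amount FROM no_such_table"),
    ]

    # Only runnable pairs survive as fine-tuning data for the student model.
    train_set = [(q, sql) for q, sql in synthetic if executes_ok(conn, sql)]
    print(f"kept {len(train_set)}/{len(synthetic)} synthetic examples")

    # Execution accuracy on a (hypothetical) student prediction vs. gold SQL:
    # the queries differ textually but return the same result set.
    gold = ["SELECT COUNT(*) FROM orders WHERE region = 'EU'"]
    pred = ["SELECT COUNT(id) FROM orders WHERE region = 'EU'"]
    print(f"execution accuracy: {execution_accuracy(conn, pred, gold):.2f}")

Comparing result sets rather than SQL text is the standard way execution accuracy is defined for NL2SQL, which is why the sketch sorts and compares query outputs instead of the queries themselves.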
@article{hoang2025_2504.00048,
  title={Distill-C: Enhanced NL2SQL via Distilled Customization with LLMs},
  author={Cong Duy Vu Hoang and Gioacchino Tangari and Clemence Lanfranchi and Dalu Guo and Paul Cayet and Steve Siu and Don Dharmasiri and Yuan-Fang Li and Long Duong and Damien Hilloulin and Rhicheek Patra and Sungpack Hong and Hassan Chafi},
  journal={arXiv preprint arXiv:2504.00048},
  year={2025}
}