Bye-bye, Bluebook? Automating Legal Procedure with Large Language Models

Legal practice requires careful adherence to procedural rules. In the United States, few are more complex than those found in The Bluebook: A Uniform System of Citation. Compliance with this system's 500+ pages of byzantine formatting instructions is the raison d'être of thousands of student law review editors and the bête noire of lawyers everywhere. To evaluate whether large language models (LLMs) are able to adhere to the procedures of such a complicated system, we construct an original dataset of 866 Bluebook tasks and test flagship LLMs from OpenAI, Anthropic, Google, Meta, and DeepSeek. We show (1) that these models produce fully compliant Bluebook citations only 69%-74% of the time and (2) that in-context learning on the Bluebook's underlying system of rules raises accuracy only to 77%. These results caution against using off-the-shelf LLMs to automate aspects of the law where fidelity to procedure is paramount.
@article{dahl2025_2505.02763,
  title   = {Bye-bye, Bluebook? Automating Legal Procedure with Large Language Models},
  author  = {Matthew Dahl},
  journal = {arXiv preprint arXiv:2505.02763},
  year    = {2025}
}