Scale-free Characteristics of Multilingual Legal Texts and the Limitations of LLMs
- AILaw
We present a comparative analysis of text complexity across domains using scale-free metrics. We quantify linguistic complexity via Heaps' exponent (vocabulary growth), Taylor's exponent (word-frequency fluctuation scaling), compression rate (redundancy), and entropy. Our corpora span three domains: legal documents (statutes, cases, deeds) as a specialized domain, general natural language texts (literature, Wikipedia), and AI-generated (GPT) text. We find that legal texts exhibit slower vocabulary growth (a lower Heaps' exponent) and higher term consistency (a higher Taylor's exponent) than general texts. Within the legal domain, statutory codes have the lowest Heaps' exponent and the highest Taylor's exponent, reflecting strict drafting conventions, while cases and deeds show higher Heaps' and lower Taylor's exponents. In contrast, GPT-generated text shows statistics that align more closely with general language patterns. These results demonstrate that legal texts exhibit domain-specific structures and complexities that current generative models do not fully replicate.
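The two scaling metrics above can be estimated directly from token streams: Heaps' exponent is the slope of vocabulary size versus text length on log-log axes, and Taylor's exponent is the slope relating a word's standard deviation of counts to its mean count across equal-length segments. The following is a minimal sketch, assuming standard log-log least-squares fits and a synthetic Zipfian stream as stand-in data (the symbols beta and alpha, the segment length, and the fitting step are illustrative choices, not the paper's exact procedure):

```python
# Sketch: estimating Heaps' and Taylor's exponents from a token stream.
# The parameter choices (step, seg_len) and the synthetic corpus below are
# illustrative assumptions, not the paper's methodology.
import math
import random
import collections

def _slope(xs, ys):
    """Ordinary least-squares slope through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def heaps_exponent(tokens, step=500):
    """Fit V(n) ~ n^beta: log vocabulary size vs. log tokens read."""
    seen, xs, ys = set(), [], []
    for i, tok in enumerate(tokens, 1):
        seen.add(tok)
        if i % step == 0:
            xs.append(math.log(i))
            ys.append(math.log(len(seen)))
    return _slope(xs, ys)

def taylor_exponent(tokens, seg_len=1000):
    """Fit sigma(w) ~ mu(w)^alpha over equal-length segments."""
    segs = [tokens[i:i + seg_len]
            for i in range(0, len(tokens) - seg_len + 1, seg_len)]
    counts = [collections.Counter(s) for s in segs]
    xs, ys = [], []
    for w in set(tokens):
        vals = [c[w] for c in counts]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        if mu > 0 and var > 0:
            xs.append(math.log(mu))
            ys.append(0.5 * math.log(var))  # log sigma = 0.5 * log var
    return _slope(xs, ys)

# Toy demo on a synthetic Zipf-distributed stream (not real corpus data).
random.seed(0)
vocab = [f"w{i}" for i in range(2000)]
weights = [1.0 / (r + 1) for r in range(2000)]
tokens = random.choices(vocab, weights=weights, k=20000)
print(f"Heaps beta ~ {heaps_exponent(tokens):.2f}, "
      f"Taylor alpha ~ {taylor_exponent(tokens):.2f}")
```

For an i.i.d. stream like this toy one, Taylor's exponent sits near 0.5 (Poisson-like fluctuations); real text, and especially templatic legal drafting with its consistent term reuse, pushes the exponent above that baseline.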