
TransLaw: Benchmarking Large Language Models in Multi-Agent Simulation of the Collaborative Translation

Xi Xuan
King-kui Sin
Yufei Zhou
Chunyu Kit
Main: 8 pages · 2 figures · 6 tables · Bibliography: 3 pages · Appendix: 5 pages
Abstract

Multi-agent systems empowered by large language models (LLMs) have demonstrated remarkable capabilities in a wide range of downstream applications, including machine translation. However, the potential of LLMs for translating Hong Kong legal judgments remains uncertain due to challenges such as intricate legal terminology, culturally embedded nuances, and strict linguistic structures. In this work, we introduce TransLaw, a novel multi-agent framework for real-world Hong Kong case law translation. It employs three specialized agents, namely a Translator, an Annotator, and a Proofreader, which collaborate to produce translations with high accuracy in legal meaning, appropriateness in style, and adequate coherence and cohesion in structure. The framework supports customizable LLM configurations and achieves a substantial cost reduction compared to professional human translation services. We evaluated its performance using 13 open-source and commercial LLMs as agents and found that it surpasses GPT-4o in legal semantic accuracy, structural coherence, and stylistic fidelity, yet trails human experts in contextualizing complex terminology and in stylistic naturalness. Our platform website is available at CityUHK, and the bilingual judgment corpus used for the evaluation is available at Hugging Face.
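To make the agent roles described in the abstract concrete, the sketch below shows one possible Translator → Annotator → Proofreader pipeline. It is a minimal illustration, not the authors' implementation: the prompts, the `call_llm` stub, and the single-pass structure are assumptions introduced here for clarity.

```python
# Minimal sketch of a three-agent collaborative translation loop, following the
# Translator / Annotator / Proofreader roles named in the abstract.
# All prompts and the call_llm backend are illustrative assumptions.

from dataclasses import dataclass


def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for any chat-completion backend (open-source or commercial)."""
    raise NotImplementedError("Plug in your preferred LLM client here.")


@dataclass
class Agent:
    name: str
    system_prompt: str

    def run(self, user_prompt: str) -> str:
        # Each agent is just an LLM call with a role-specific system prompt.
        return call_llm(self.system_prompt, user_prompt)


translator = Agent(
    "Translator",
    "Translate the Hong Kong judgment excerpt into the target language, "
    "preserving legal meaning and register.",
)
annotator = Agent(
    "Annotator",
    "Compare the draft translation against the source and list terminology, "
    "style, and coherence issues as numbered comments.",
)
proofreader = Agent(
    "Proofreader",
    "Revise the draft translation to address the annotator's comments and "
    "return only the final translation.",
)


def translate_judgment(source_text: str) -> str:
    """Run one collaborative pass: draft, annotate, then revise."""
    draft = translator.run(source_text)
    comments = annotator.run(f"SOURCE:\n{source_text}\n\nDRAFT:\n{draft}")
    final = proofreader.run(
        f"SOURCE:\n{source_text}\n\nDRAFT:\n{draft}\n\nCOMMENTS:\n{comments}"
    )
    return final
```

In practice, each agent could be backed by a different LLM, which is one way a framework of this kind can support customizable LLM configurations per role.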

@article{xuan2025_2507.00875,
  title={TransLaw: Benchmarking Large Language Models in Multi-Agent Simulation of the Collaborative Translation},
  author={Xi Xuan and King-kui Sin and Yufei Zhou and Chunyu Kit},
  journal={arXiv preprint arXiv:2507.00875},
  year={2025}
}