M-MAD: Multidimensional Multi-Agent Debate for Advanced Machine Translation Evaluation

Abstract

Recent advancements in large language models (LLMs) have given rise to the LLM-as-a-judge paradigm, showcasing their potential to deliver human-like judgments. However, in the field of machine translation (MT) evaluation, current LLM-as-a-judge methods fall short of learned automatic metrics. In this paper, we propose Multidimensional Multi-Agent Debate (M-MAD), a systematic LLM-based multi-agent framework for advanced LLM-as-a-judge MT evaluation. Our findings demonstrate that M-MAD achieves significant advancements by (1) decoupling heuristic MQM criteria into distinct evaluation dimensions for fine-grained assessments; (2) employing multi-agent debates to harness the collaborative reasoning capabilities of LLMs; (3) synthesizing dimension-specific results into a final evaluation judgment to ensure robust and reliable outcomes. Comprehensive experiments show that M-MAD not only outperforms all existing LLM-as-a-judge methods but also competes with state-of-the-art reference-based automatic metrics, even when powered by a suboptimal model like GPT-4o mini. Detailed ablations and analysis highlight the superiority of our framework design, offering a fresh perspective on the LLM-as-a-judge paradigm. Our code and data are publicly available at this https URL.
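The abstract outlines a three-stage pipeline: split MQM criteria into separate evaluation dimensions, run a multi-agent debate per dimension, and synthesize the dimension-level verdicts into one final judgment. The sketch below illustrates that control flow only; the dimension list, agent roles, number of debate rounds, and all function names are illustrative assumptions rather than the paper's exact design, and the LLM call is stubbed so the example runs without an API key.

```python
# Illustrative sketch of an M-MAD-style pipeline (not the authors' implementation).
from typing import Callable, Dict, List

# Placeholder signature for an LLM call (e.g., GPT-4o mini); swap in a real client.
LLM = Callable[[str], str]

# Assumed MQM-style dimension split; the paper's actual dimensions may differ.
MQM_DIMENSIONS = ["accuracy", "fluency", "style", "terminology"]


def debate_dimension(llm: LLM, source: str, translation: str,
                     dimension: str, rounds: int = 2) -> str:
    """Run a two-agent debate on a single evaluation dimension."""
    history: List[str] = []
    for r in range(rounds):
        for role in ("affirmative", "negative"):
            prompt = (
                f"Evaluate the {dimension} of this translation.\n"
                f"Source: {source}\nTranslation: {translation}\n"
                f"Debate so far: {history}\nArgue as the {role} agent."
            )
            history.append(f"[{role}, round {r}] " + llm(prompt))
    # A judge agent condenses the debate into a dimension-level verdict.
    return llm(f"Summarize this debate into a {dimension} judgment: {history}")


def m_mad_evaluate(llm: LLM, source: str, translation: str) -> str:
    """Stages 1-2: per-dimension debates; stage 3: synthesize a final judgment."""
    verdicts: Dict[str, str] = {
        dim: debate_dimension(llm, source, translation, dim)
        for dim in MQM_DIMENSIONS
    }
    return llm(
        "Combine these dimension-level judgments into one final MT evaluation: "
        f"{verdicts}"
    )


if __name__ == "__main__":
    # Stub LLM so the sketch is runnable end to end.
    echo_llm: LLM = lambda prompt: f"(model response to {len(prompt)}-char prompt)"
    print(m_mad_evaluate(echo_llm, "Bonjour le monde.", "Hello world."))
```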

@article{feng2025_2412.20127,
  title={M-MAD: Multidimensional Multi-Agent Debate for Advanced Machine Translation Evaluation},
  author={Zhaopeng Feng and Jiayuan Su and Jiamei Zheng and Jiahan Ren and Yan Zhang and Jian Wu and Hongwei Wang and Zuozhu Liu},
  journal={arXiv preprint arXiv:2412.20127},
  year={2025}
}