
DHP Benchmark: Are LLMs Good NLG Evaluators?

Abstract

Large Language Models (LLMs) are increasingly serving as evaluators in Natural Language Generation (NLG) tasks; this is often referred to as the ``LLM-as-a-judge'' paradigm. However, the capabilities of LLMs in evaluating NLG quality remain underexplored. Current studies depend on human assessments and simple metrics that fail to capture the discernment of LLMs across diverse NLG tasks. To address this gap, we propose the Discernment of Hierarchical Perturbation (DHP) benchmarking framework, which provides quantitative discernment scores for LLMs. This framework leverages hierarchically perturbed text data and statistical tests to systematically measure the NLG evaluation capabilities of LLMs. We re-established six evaluation datasets for this benchmark, covering four NLG tasks: Summarization, Story Completion, Question Answering, and Translation. Our comprehensive benchmarking of five major LLM families provides critical insights into their strengths and limitations as NLG evaluators. Our dataset is available at this https URL.
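
To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of how discernment over hierarchically perturbed text could be probed: each text is perturbed at increasing severity levels, an LLM judge scores every version, and a paired statistical test checks whether the judge's ratings drop as severity rises. The perturbation function, the judge_score callable, the choice of a one-sided Wilcoxon signed-rank test, and all parameters are illustrative assumptions; DHP's actual perturbation hierarchy, prompts, and statistics may differ.

from typing import Callable, List
from scipy.stats import wilcoxon  # paired, non-parametric signed-rank test

def discernment_check(originals: List[str],
                      perturb: Callable[[str, int], str],
                      judge_score: Callable[[str], float],
                      levels: int = 3,
                      alpha: float = 0.05) -> List[bool]:
    """For each step up in perturbation severity, test whether the LLM judge
    rates the less-perturbed texts significantly higher (one-sided Wilcoxon)."""
    # scores[k][i] = judge score of the i-th text at severity level k
    # (level 0 is the unperturbed original)
    scores = [[judge_score(t) for t in originals]]
    for k in range(1, levels + 1):
        scores.append([judge_score(perturb(t, k)) for t in originals])
    flags = []
    for k in range(levels):
        _, p_value = wilcoxon(scores[k], scores[k + 1], alternative="greater")
        flags.append(p_value < alpha)  # True: the judge discerns this severity step
    return flags

A full benchmark would further aggregate such per-step significance results into a quantitative discernment score per task and per model; that aggregation is deliberately omitted here, since the paper's exact scoring procedure is not reproduced in this sketch.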

@article{wang2025_2408.13704,
  title={DHP Benchmark: Are LLMs Good NLG Evaluators?},
  author={Yicheng Wang and Jiayi Yuan and Yu-Neng Chuang and Zhuoer Wang and Yingchi Liu and Mark Cusick and Param Kulkarni and Zhengping Ji and Yasser Ibrahim and Xia Hu},
  journal={arXiv preprint arXiv:2408.13704},
  year={2025}
}