Evaluating the Consistency of LLM Evaluators

International Conference on Computational Linguistics (COLING), 2024
Main: 4 pages · Appendix: 4 pages · Bibliography: 2 pages · 4 figures · 7 tables
Abstract

Large language models (LLMs) have shown potential as general evaluators, along with the evident benefits of speed and cost. While their correlation with human annotators has been widely studied, their consistency as evaluators remains understudied, raising concerns about the reliability of LLM evaluators. In this paper, we conduct extensive studies on two aspects of consistency in LLM evaluation, Self-Consistency (SC) and Inter-scale Consistency (IC), across different scoring scales and criterion granularities with open-source and proprietary models. Our comprehensive analysis demonstrates that strong proprietary models are not necessarily consistent evaluators, highlighting the importance of considering consistency in assessing the capability of LLM evaluators.
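
The abstract does not spell out how Self-Consistency and Inter-scale Consistency are computed; the sketch below is an illustrative assumption, not the paper's definitions. It treats SC as pairwise agreement among repeated scorings of the same response and IC as rank correlation between scores assigned on two different scales.

```python
# Illustrative sketch only: the exact SC/IC metrics are defined in the paper,
# not in the abstract; the functions below are assumed formulations.
from statistics import mean
from itertools import combinations
from scipy.stats import spearmanr


def self_consistency(repeated_scores):
    """Fraction of agreeing pairs among repeated scores for one response.

    `repeated_scores` holds the scores an LLM evaluator assigned to the same
    response over several independent runs (e.g., [4, 4, 5]).
    """
    pairs = list(combinations(repeated_scores, 2))
    if not pairs:
        return 1.0
    return mean(1.0 if a == b else 0.0 for a, b in pairs)


def inter_scale_consistency(scores_scale_a, scores_scale_b):
    """Spearman correlation between scores of the same responses on two scales
    (e.g., a 1-5 scale vs. a 1-10 scale); rank correlation avoids having to
    map the two ranges onto each other.
    """
    rho, _ = spearmanr(scores_scale_a, scores_scale_b)
    return rho


if __name__ == "__main__":
    # Three evaluation runs of the same response on a 1-5 scale.
    print(self_consistency([4, 4, 5]))  # ~0.33 pairwise agreement
    # The same five responses scored on a 1-5 scale and on a 1-10 scale.
    print(inter_scale_consistency([2, 3, 3, 5, 4], [4, 6, 5, 9, 8]))
```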
