CheckEval: A reliable LLM-as-a-Judge framework for evaluating text generation using checklists

27 March 2024
Yukyung Lee
Joonghoon Kim
Jaehee Kim
Hyowon Cho
Jaewook Kang
Pilsung Kang
Najoung Kim
Abstract

Existing LLM-as-a-Judge approaches for evaluating text generation suffer from rating inconsistencies, with low agreement and high rating variance across different evaluator models. We attribute this to subjective evaluation criteria combined with Likert scale scoring in existing protocols. To address this issue, we introduce CheckEval, a checklist-based evaluation framework that improves rating reliability via decomposed binary questions. Through experiments with 12 evaluator models across multiple datasets, we first demonstrate that CheckEval strongly correlates with human judgments, improving the average correlation with human judgments by 0.10. More importantly, CheckEval dramatically improves the average agreement across evaluator models by 0.45 and reduces score variance. Furthermore, CheckEval scores are more interpretable because the framework decomposes evaluation criteria into traceable binary decisions, allowing analyses of the specific attributes driving quality judgments.

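For a concrete picture of the checklist-based protocol described in the abstract, the Python sketch below illustrates the general idea: a quality criterion is decomposed into binary questions, each question is posed to an evaluator model, and the yes/no answers are aggregated into a score. The checklist items, the ask_judge stub, and the simple-mean aggregation are illustrative assumptions for this sketch, not the authors' implementation.

# Illustrative sketch of checklist-style LLM evaluation (not the CheckEval code).
# The checklist items, ask_judge(), and the mean aggregation are assumptions.

from typing import Callable, List

# Hypothetical checklist decomposing a "coherence" criterion into binary questions.
COHERENCE_CHECKLIST: List[str] = [
    "Does the summary mention the main topic of the source text?",
    "Are the sentences logically ordered?",
    "Is the summary free of contradictions with the source text?",
    "Does the summary avoid abrupt topic shifts?",
]

def ask_judge(question: str, source: str, output: str) -> bool:
    """Placeholder for a call to an evaluator LLM.

    In practice this would prompt the model with the source text, the generated
    output, and one binary checklist question, then parse a yes/no answer.
    It is stubbed out here so the sketch stays self-contained.
    """
    raise NotImplementedError("Connect this to your evaluator model.")

def checklist_score(
    source: str,
    output: str,
    checklist: List[str],
    judge: Callable[[str, str, str], bool] = ask_judge,
) -> float:
    """Aggregate binary answers into a single score in [0, 1]."""
    answers = [judge(question, source, output) for question in checklist]
    return sum(answers) / len(answers)

if __name__ == "__main__":
    # Demo with a trivial judge that answers "yes" to every question,
    # just to show the aggregation step.
    score = checklist_score(
        source="The committee approved the new budget on Tuesday.",
        output="The committee approved the budget.",
        checklist=COHERENCE_CHECKLIST,
        judge=lambda question, source, output: True,
    )
    print(f"Checklist score: {score:.2f}")  # -> 1.00

Because each checklist item yields a traceable yes/no decision, a low aggregate score can be attributed to the specific questions that failed, which is the interpretability benefit the abstract highlights.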
@article{lee2025_2403.18771,
  title={CheckEval: A reliable LLM-as-a-Judge framework for evaluating text generation using checklists},
  author={Yukyung Lee and Joonghoon Kim and Jaehee Kim and Hyowon Cho and Jaewook Kang and Pilsung Kang and Najoung Kim},
  journal={arXiv preprint arXiv:2403.18771},
  year={2025}
}