HRET: A Self-Evolving LLM Evaluation Toolkit for Korean

Abstract

Recent advancements in Korean large language models (LLMs) have spurred numerous benchmarks and evaluation methodologies, yet the lack of a standardized evaluation framework has led to inconsistent results and limited comparability. To address this, we introduce HRET (Haerae Evaluation Toolkit), an open-source, self-evolving evaluation framework tailored specifically for Korean LLMs. HRET unifies diverse evaluation methods, including logit-based scoring, exact-match, language-inconsistency penalization, and LLM-as-a-Judge assessments. Its modular, registry-based architecture integrates major benchmarks (HAE-RAE Bench, KMMLU, KUDGE, HRM8K) and multiple inference backends (vLLM, HuggingFace, OpenAI-compatible endpoints). With automated pipelines for continuous evolution, HRET provides a robust foundation for reproducible, fair, and transparent Korean NLP research.
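To make the registry-based design concrete, below is a minimal Python sketch of the general pattern: evaluation methods register themselves under a name, and a pipeline looks them up at run time. All names here (EVALUATORS, register, exact_match, run_eval) are illustrative assumptions and do not reflect HRET's actual API.

from typing import Callable, Dict, List

# Hypothetical registry mapping method names to scoring functions.
EVALUATORS: Dict[str, Callable[[str, str], float]] = {}

def register(name: str):
    """Decorator that adds an evaluation method to the registry under `name`."""
    def wrapper(fn: Callable[[str, str], float]):
        EVALUATORS[name] = fn
        return fn
    return wrapper

@register("exact_match")
def exact_match(prediction: str, reference: str) -> float:
    # Score 1.0 only when the stripped prediction equals the reference.
    return float(prediction.strip() == reference.strip())

def run_eval(method: str, predictions: List[str], references: List[str]) -> float:
    """Look up a scorer in the registry and return the mean score."""
    scorer = EVALUATORS[method]
    scores = [scorer(p, r) for p, r in zip(predictions, references)]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    preds = ["서울", "부산"]
    refs = ["서울", "대구"]
    print(run_eval("exact_match", preds, refs))  # 0.5

A registry like this is what lets new benchmarks, scoring methods, or inference backends be plugged in without modifying the core pipeline, which is the property the abstract attributes to HRET's architecture.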

@article{lee2025_2503.22968,
  title={HRET: A Self-Evolving LLM Evaluation Toolkit for Korean},
  author={Hanwool Lee and Soo Yong Kim and Dasol Choi and SangWon Baek and Seunghyeok Hong and Ilgyun Jeong and Inseon Hwang and Naeun Lee and Guijin Son},
  journal={arXiv preprint arXiv:2503.22968},
  year={2025}
}