ETHIC: Evaluating Large Language Models on Long-Context Tasks with High Information Coverage

Recent advancements in large language models (LLMs) capable of processing extremely long texts highlight the need for a dedicated evaluation benchmark to assess their long-context capabilities. However, existing methods, like the needle-in-a-haystack test, do not effectively assess whether these models fully utilize contextual information, raising concerns about the reliability of current evaluation techniques. To thoroughly examine the effectiveness of existing benchmarks, we introduce a new metric called information coverage (IC), which quantifies the proportion of the input context necessary for answering queries. Our findings indicate that current benchmarks exhibit low IC; although the input context may be extensive, the actual usable context is often limited. To address this, we present ETHIC, a novel benchmark designed to assess LLMs' ability to leverage the entire context. Our benchmark comprises 1,986 test instances spanning four long-context tasks with high IC scores in the domains of books, debates, medicine, and law. Our evaluations reveal significant performance drops in contemporary LLMs, highlighting a critical challenge in managing long contexts. Our benchmark is available at this https URL.
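The abstract describes information coverage (IC) as the proportion of the input context necessary for answering a query. The following is a minimal, hypothetical sketch of how such a ratio could be computed, assuming the spans of the context required for the answer are already annotated; the function name, span representation, and example data are illustrative and not taken from the paper, whose exact definition of IC may differ.

```python
# Illustrative sketch of an information-coverage-style ratio. Assumes we are given
# the character spans of the context that are needed to answer the query; the
# paper's actual IC metric may be defined differently (e.g., over tokens or sentences).

def information_coverage(context: str, required_spans: list[tuple[int, int]]) -> float:
    """Fraction of context characters covered by the spans needed to answer the query."""
    if not context:
        return 0.0
    covered = [False] * len(context)
    for start, end in required_spans:
        for i in range(max(0, start), min(len(context), end)):
            covered[i] = True
    return sum(covered) / len(context)

# Example: only a small slice of a long document is needed, giving a low IC,
# which mirrors the abstract's observation about existing benchmarks.
needle = "the answer is 42."
doc = "irrelevant filler " * 50 + needle + " more filler" * 50
start = doc.index(needle)
print(f"IC = {information_coverage(doc, [(start, start + len(needle))]):.3f}")
```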
@article{lee2025_2410.16848,
  title={ETHIC: Evaluating Large Language Models on Long-Context Tasks with High Information Coverage},
  author={Taewhoo Lee and Chanwoong Yoon and Kyochul Jang and Donghyeon Lee and Minju Song and Hyunjae Kim and Jaewoo Kang},
  journal={arXiv preprint arXiv:2410.16848},
  year={2025}
}