HalluVerse25: Fine-grained Multilingual Benchmark Dataset for LLM Hallucinations
Large Language Models (LLMs) are increasingly used in various contexts, yet remain prone to generating non-factual content, commonly referred to as "hallucinations". The literature categorizes hallucinations into several types, including entity-level, relation-level, and sentence-level hallucinations. However, existing hallucination datasets often fail to capture fine-grained hallucinations in multilingual settings. In this work, we introduce HalluVerse25, a multilingual LLM hallucination dataset that categorizes fine-grained hallucinations in English, Arabic, and Turkish. Our dataset construction pipeline uses an LLM to inject hallucinations into factual biographical sentences, followed by a rigorous human annotation process to ensure data quality. We evaluate several LLMs on HalluVerse25, providing valuable insights into how proprietary models perform in detecting LLM-generated hallucinations across different contexts.
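To make the construction pipeline concrete, the sketch below illustrates one plausible way the hallucination-injection step could be prompted. It is an assumption-laden illustration, not the authors' actual prompts or code: the `call_llm` callable, the instruction wording, and the type definitions are hypothetical stand-ins, and in the paper the LLM output is additionally vetted by human annotators before inclusion.

```python
# Illustrative sketch only; the paper's actual prompts and pipeline are not reproduced here.
# `call_llm` is a hypothetical stand-in for whatever LLM API is used to inject hallucinations.

HALLUCINATION_TYPES = {
    "entity": "Replace exactly one named entity (person, place, date, or organization) "
              "with a plausible but incorrect one.",
    "relation": "Alter the relation between two entities (e.g., swap 'born in' for 'died in') "
                "while keeping the entities themselves unchanged.",
    "sentence": "Rewrite the sentence so that its overall claim contradicts the original fact.",
}

def inject_hallucination(factual_sentence: str, hallucination_type: str, call_llm) -> str:
    """Ask an LLM to inject one fine-grained hallucination into a factual sentence.

    The edited sentence would then be passed to human annotators, who check that it
    contains exactly the intended type of hallucination before it enters the dataset.
    """
    instruction = HALLUCINATION_TYPES[hallucination_type]
    prompt = (
        "You will be given a factual biographical sentence.\n"
        f"{instruction}\n"
        "Return only the edited sentence.\n\n"
        f"Sentence: {factual_sentence}"
    )
    return call_llm(prompt)
```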
@article{abdaljalil2025_2503.07833,
  title={HalluVerse25: Fine-grained Multilingual Benchmark Dataset for LLM Hallucinations},
  author={Samir Abdaljalil and Hasan Kurban and Erchin Serpedin},
  journal={arXiv preprint arXiv:2503.07833},
  year={2025}
}