
SINdex: Semantic INconsistency Index for Hallucination Detection in LLMs

Abstract

Large language models (LLMs) are increasingly deployed across diverse domains, yet they are prone to generating factually incorrect outputs, commonly known as "hallucinations." Among existing mitigation strategies, uncertainty-based methods are particularly attractive due to their ease of implementation, independence from external data, and compatibility with standard LLMs. In this work, we introduce a novel and scalable uncertainty-based semantic clustering framework for automated hallucination detection. Our approach leverages sentence embeddings and hierarchical clustering alongside a newly proposed inconsistency measure, SINdex, to yield more homogeneous clusters and more accurate detection of hallucination phenomena across various LLMs. Evaluations on prominent open- and closed-book QA datasets demonstrate that our method achieves AUROC improvements of up to 9.3% over state-of-the-art techniques. Extensive ablation studies further validate the effectiveness of each component in our framework.
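The pipeline described above (embed sampled answers, cluster them hierarchically, score inconsistency) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embeddings are assumed to come from any sentence encoder, the clustering uses SciPy's average-linkage agglomerative clustering with cosine distance, and the entropy-over-cluster-sizes score is a stand-in proxy, not the actual SINdex measure defined in the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist


def inconsistency_score(embeddings: np.ndarray, distance_threshold: float = 0.5) -> float:
    """Proxy semantic-inconsistency score for a set of sampled answers.

    embeddings: (n_samples, dim) sentence embeddings of multiple LLM
    answers to the same question. Higher score = answers fall into
    more, smaller semantic clusters = more likely hallucination.
    """
    # Pairwise cosine distances between answer embeddings
    dists = pdist(embeddings, metric="cosine")
    # Average-linkage hierarchical clustering
    Z = linkage(dists, method="average")
    # Cut the dendrogram at a fixed cosine-distance threshold
    labels = fcluster(Z, t=distance_threshold, criterion="distance")
    # Entropy over cluster sizes: 0 when all answers agree (one cluster)
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())
```

Under this sketch, semantically consistent samples collapse into a single cluster and score near zero, while contradictory samples split into multiple clusters and score higher, which is the signal thresholded for detection.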

@article{abdaljalil2025_2503.05980,
  title={SINdex: Semantic INconsistency Index for Hallucination Detection in LLMs},
  author={Samir Abdaljalil and Hasan Kurban and Parichit Sharma and Erchin Serpedin and Rachad Atat},
  journal={arXiv preprint arXiv:2503.05980},
  year={2025}
}