
HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification

Abstract

This paper introduces a comprehensive system for detecting hallucinations in large language model (LLM) outputs in enterprise settings. We present a novel taxonomy of LLM responses specific to hallucination in enterprise applications, categorizing them into context-based, common-knowledge, enterprise-specific, and innocuous statements. Our hallucination detection model, HDM-2, validates LLM responses with respect to both the provided context and generally known facts (common knowledge). It provides both hallucination scores and word-level annotations, enabling precise identification of problematic content. To evaluate performance on context-based and common-knowledge hallucinations, we introduce a new dataset, HDMBench. Experimental results demonstrate that HDM-2 outperforms existing approaches across the RagTruth, TruthfulQA, and HDMBench datasets. This work addresses the specific challenges of enterprise deployment, including computational efficiency, domain specialization, and fine-grained error identification. Our evaluation dataset, model weights, and inference code are publicly available.
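
To make the abstract's two key ideas concrete, the sketch below illustrates what the four-way response taxonomy and the combined response-level score plus word-level annotations could look like as data structures. All names here (ResponseType, HallucinationSpan, DetectionResult) are hypothetical illustrations for this summary, not the released HDM-2 interface.

```python
# Hypothetical sketch of the taxonomy and detector output described in the abstract.
# None of these names are taken from the HDM-2 release; they only illustrate the shape
# of "hallucination scores and word-level annotations".
from dataclasses import dataclass
from enum import Enum


class ResponseType(Enum):
    """The four response categories in the paper's taxonomy."""
    CONTEXT_BASED = "context_based"              # checkable against the provided context
    COMMON_KNOWLEDGE = "common_knowledge"        # checkable against generally known facts
    ENTERPRISE_SPECIFIC = "enterprise_specific"  # needs enterprise/domain knowledge
    INNOCUOUS = "innocuous"                      # makes no factual claim to verify


@dataclass
class HallucinationSpan:
    start: int              # character offset where the flagged span begins
    end: int                # character offset where the flagged span ends
    score: float            # per-span hallucination score, e.g. in [0, 1]
    category: ResponseType  # taxonomy category the span was judged under


@dataclass
class DetectionResult:
    response_score: float            # overall hallucination score for the response
    spans: list[HallucinationSpan]   # word-level annotations of problematic content
```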

@article{paudel2025_2504.07069,
  title={HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification},
  author={Bibek Paudel and Alexander Lyzhov and Preetam Joshi and Puneet Anand},
  journal={arXiv preprint arXiv:2504.07069},
  year={2025}
}