
First Hallucination Tokens Are Different from Conditional Ones

Main: 4 pages · 39 figures · 1 table · Bibliography: 3 pages · Appendix: 37 pages
Abstract

Large Language Models (LLMs) hallucinate, and detecting these cases is key to ensuring trust. While many approaches address hallucination detection at the response or span level, recent work explores token-level detection, enabling more fine-grained intervention. However, the distribution of the hallucination signal across sequences of hallucinated tokens remains unexplored. We leverage token-level annotations from the RAGTruth corpus and find that the first hallucinated token is far more detectable than later ones. This structural property holds across models, suggesting that first hallucination tokens play a key role in token-level hallucination detection. Our code is available at this https URL.
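To make the first-versus-conditional distinction concrete, here is a minimal sketch of how span-style token annotations (as in RAGTruth-like corpora) can be split into span-initial and later hallucinated tokens. The function name and label convention are illustrative assumptions, not the paper's actual code.

```python
# Hedged sketch: split per-token hallucination labels into "first" tokens
# (span-initial) and "conditional" tokens (later tokens in the same span),
# assuming binary labels where 1 marks a hallucinated token.

from typing import List, Tuple


def split_first_vs_conditional(labels: List[int]) -> Tuple[List[int], List[int]]:
    """Return two masks: one marking the first token of each hallucinated
    span, the other marking the remaining (conditional) tokens."""
    first_mask = [0] * len(labels)
    cond_mask = [0] * len(labels)
    prev = 0
    for i, lab in enumerate(labels):
        if lab == 1 and prev == 0:
            first_mask[i] = 1   # span-initial hallucinated token
        elif lab == 1:
            cond_mask[i] = 1    # later token inside the same span
        prev = lab
    return first_mask, cond_mask


if __name__ == "__main__":
    # Toy example: tokens 3-5 and 8 are hallucinated.
    labels = [0, 0, 0, 1, 1, 1, 0, 0, 1, 0]
    first, cond = split_first_vs_conditional(labels)
    print(first)  # [0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
    print(cond)   # [0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
```

With such a split, detectability can be evaluated separately on the two masks, which is the kind of comparison the abstract describes.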
