Detecting Contextual Hallucinations in LLMs with Frequency-Aware Attention

Siya Qi
Yudong Chen
Runcong Zhao
Qinglin Zhu
Zhanghao Hu
Wei Liu
Yulan He
Zheng Yuan
Lin Gui
Main: 8 Pages
11 Figures
Bibliography: 3 Pages
10 Tables
Appendix: 14 Pages
Abstract

Hallucination detection is critical for ensuring the reliability of large language models (LLMs) in context-based generation. Prior work has explored intrinsic signals available during generation, among which attention offers a direct view of grounding behavior. However, existing approaches typically rely on coarse summaries that fail to capture fine-grained instabilities in attention. Inspired by signal processing, we introduce a frequency-aware perspective on attention by analyzing how it varies during generation. We model attention distributions as discrete signals and extract high-frequency components that reflect rapid local changes in attention. Our analysis reveals that hallucinated tokens are associated with elevated high-frequency attention energy, reflecting fragmented and unstable grounding behavior. Based on this insight, we develop a lightweight hallucination detector built on high-frequency attention features. Experiments on the RAGTruth and HalluRAG benchmarks show that our approach achieves performance gains over verification-based, internal-representation-based, and attention-based methods across models and tasks.
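The abstract does not specify the implementation, but as a rough illustration of the frequency-aware idea, the sketch below treats each generated token's attention distribution over the source context as a discrete signal and scores it by the share of energy in its high-frequency FFT bins. Everything here is an assumption for illustration, not the authors' method: the function name `high_freq_energy`, the `cutoff` hyperparameter, and the choice of taking the spectrum over context positions are all hypothetical.

```python
import numpy as np
from numpy.fft import rfft

def high_freq_energy(attn: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    """Return one high-frequency energy score per generated token.

    attn: hypothetical (num_generated_tokens, num_context_tokens) attention
          matrix from a single head/layer; each row is a distribution over
          the source context. A spiky, fragmented row yields a high score.
    cutoff: assumed fraction of low-frequency bins to discard.
    """
    spectrum = np.abs(rfft(attn, axis=-1)) ** 2   # power spectrum per token
    n_bins = spectrum.shape[-1]
    start = int(cutoff * n_bins)                  # first bin counted as "high frequency"
    hf = spectrum[..., start:].sum(axis=-1)       # high-frequency energy
    total = spectrum.sum(axis=-1) + 1e-12         # normalize out overall scale
    return hf / total

# Toy usage: 32 generated tokens attending over 128 context tokens.
attn = np.random.dirichlet(np.ones(128), size=32)
scores = high_freq_energy(attn)
print(scores.shape)  # (32,)
```

Under this reading, the per-token scores would serve as input features to the "lightweight detector" the abstract mentions, e.g. a simple logistic-regression classifier over tokens; the actual detector architecture and feature set are described in the paper itself.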
