
AutoHall: Automated Factuality Hallucination Dataset Generation for Large Language Models

IEEE Transactions on Audio, Speech, and Language Processing (IEEE TASLP), 2023
6 Figures, Appendix: 14 Pages
Abstract

Large language models (LLMs) have gained broad applications across various domains but still struggle with hallucinations. Currently, hallucinations occur frequently in the generation of factual content and pose a great challenge to trustworthy LLMs. However, hallucination detection is hindered by the laborious and expensive manual annotation of hallucinatory content. Meanwhile, as different LLMs exhibit distinct types and rates of hallucination, the collection of hallucination datasets is inherently model-specific, which further increases the cost. To address this issue, this paper proposes AutoHall, a method for automatically constructing model-specific hallucination datasets based on existing fact-checking datasets. The empirical results reveal variations in hallucination proportions and types among different models. Moreover, we introduce a zero-resource and black-box hallucination detection method based on self-contradiction to recognize the hallucinations in our constructed dataset, achieving superior detection performance compared to baselines. Further analysis of our dataset provides insight into factors that may contribute to LLM hallucinations. Our code and datasets are publicly available at this https URL.
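The abstract describes the self-contradiction detection method only at a high level. The sketch below illustrates the general idea under stated assumptions: the `llm_generate` wrapper, the prompts, and the decision rule are hypothetical placeholders for illustration, not the paper's released implementation.

```python
# Minimal sketch of self-contradiction-based hallucination detection, assuming a
# generic `llm_generate(prompt, temperature)` wrapper around a black-box LLM API
# (hypothetical helper). The idea: re-sample the model on the same claim several
# times and ask the model itself whether the samples contradict the original
# output; contradictions suggest a hallucination, with no external knowledge used.

from typing import List


def llm_generate(prompt: str, temperature: float = 1.0) -> str:
    """Placeholder for a black-box LLM call (plug in your own client)."""
    raise NotImplementedError("provide an LLM client here")


def detect_hallucination(claim: str, original_answer: str, k: int = 5) -> bool:
    """Return True if sampled answers contradict the model's original answer."""
    # 1. Re-sample k alternative answers to the same claim at a higher temperature.
    samples: List[str] = [
        llm_generate(f"Verify the following claim and explain: {claim}", temperature=1.0)
        for _ in range(k)
    ]

    # 2. Ask the model (zero-resource, black-box) whether each sample contradicts
    #    the original answer.
    contradictions = 0
    for sample in samples:
        verdict = llm_generate(
            "Do the two passages below contradict each other? Answer Yes or No.\n\n"
            f"Passage A: {original_answer}\n\nPassage B: {sample}",
            temperature=0.0,
        )
        if verdict.strip().lower().startswith("yes"):
            contradictions += 1

    # 3. Flag as hallucination if any sampled answer contradicts the original.
    return contradictions > 0
```

A practical variant might require a majority of samples to contradict the original before flagging, trading recall for precision; the single-contradiction threshold above is only one possible choice.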
