
Can LLMs Detect Their Own Hallucinations?

Main: 4 pages
Bibliography: 3 pages
Appendix: 1 page
10 figures
7 tables
Abstract

Large language models (LLMs) can generate fluent responses, but they sometimes hallucinate facts. In this paper, we investigate whether LLMs can detect their own hallucinations. We formulate hallucination detection as a sentence-level classification task. We propose a framework for estimating an LLM's hallucination-detection capability, together with a classification method that uses Chain-of-Thought (CoT) prompting to extract knowledge stored in the model's parameters. The experimental results indicated that GPT-3.5 Turbo with CoT detected 58.2% of its own hallucinations. We conclude that LLMs with CoT can detect their own hallucinations when sufficient knowledge is contained in their parameters.
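
The setup described above, treating hallucination detection as sentence-level classification with a CoT prompt, can be illustrated with a short sketch. The example below is a hypothetical illustration, not the paper's exact protocol: the prompt wording, label parsing, and use of the OpenAI Python client with the gpt-3.5-turbo model are our own assumptions.

```python
# Minimal sketch of CoT-based self-hallucination detection.
# Assumptions: prompt wording, labels, and parsing are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COT_PROMPT = (
    "You will be shown a sentence generated by a language model.\n"
    "First, reason step by step about whether the sentence is factually\n"
    "correct according to your own knowledge. Then answer with exactly one\n"
    "label on the last line: HALLUCINATION or FACTUAL.\n\n"
    "Sentence: {sentence}"
)

def detect_hallucination(sentence: str, model: str = "gpt-3.5-turbo") -> bool:
    """Return True if the model labels the given sentence as a hallucination."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": COT_PROMPT.format(sentence=sentence)}],
        temperature=0,
    )
    # The CoT reasoning comes first; the final line carries the label.
    last_line = response.choices[0].message.content.strip().splitlines()[-1]
    return "HALLUCINATION" in last_line.upper()

if __name__ == "__main__":
    print(detect_hallucination("Marie Curie won three Nobel Prizes."))
```

In this sketch, the model is asked to reason before committing to a label, mirroring the idea that CoT helps surface knowledge already stored in the model's parameters before the classification decision is made.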
