Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection
Questionable responses caused by knowledge hallucination may make LLMs unstable in decision-making. However, whether LLM hallucination can be harnessed to generate negative reasoning that facilitates fake news detection has never been investigated. This study proposes a novel supervised self-reinforced reasoning rectification approach, SR, which uses LLM reflection to yield both reasonable (positive) reasoning and wrong understandings (negative reasoning) for news, enabling semantic consistency learning. Building on this, we construct a negative-reasoning-based news learning model, \emph{NRFE}, which leverages positive and negative news-reasoning pairs to learn the semantic consistency between news and its reasoning. To avoid the influence of label-implicated reasoning, we deploy a student model, \emph{NRFE-D}, which takes only news content as input and inspects the performance of our method by distilling knowledge from \emph{NRFE}. Experimental results on three popular fake news datasets demonstrate the superiority of our method over three kinds of baselines: prompting LLMs, fine-tuning pre-trained SLMs, and other representative fake news detection methods.
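The abstract does not include implementation details, so the following is only a minimal, hypothetical sketch (PyTorch with Hugging Face Transformers) of the general idea: an NRFE-style teacher cross-encodes positive/negative news-reasoning pairs for semantic-consistency learning, and an NRFE-D-style student that sees only news content is distilled from it. The backbone name, example texts, labels, temperature, and helper functions are assumptions, not the authors' specification.

```python
# Minimal sketch (not the authors' code): train a cross-encoder on
# (news, reasoning) consistency labels, then distill into a news-only student.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ENCODER = "bert-base-uncased"  # assumed backbone; the paper's choice may differ
tok = AutoTokenizer.from_pretrained(ENCODER)

# NRFE-like teacher: scores whether a reasoning chain is consistent with the news.
teacher = AutoModelForSequenceClassification.from_pretrained(ENCODER, num_labels=2)
# NRFE-D-like student: sees only the news content.
student = AutoModelForSequenceClassification.from_pretrained(ENCODER, num_labels=2)

def consistency_loss(news: str, reasoning: str, consistent: int) -> torch.Tensor:
    """Cross-encode a (news, reasoning) pair; label 1 = semantically consistent."""
    enc = tok(news, reasoning, truncation=True, return_tensors="pt")
    logits = teacher(**enc).logits
    return F.cross_entropy(logits, torch.tensor([consistent]))

def distill_loss(news: str, reasoning: str, temperature: float = 2.0) -> torch.Tensor:
    """Student (news only) matches the teacher's softened prediction for the pair."""
    with torch.no_grad():
        t_logits = teacher(**tok(news, reasoning, truncation=True,
                                 return_tensors="pt")).logits
    s_logits = student(**tok(news, truncation=True, return_tensors="pt")).logits
    return F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

# Toy batch: one article paired with sound reasoning (positive pair) and with an
# LLM's hallucinated "negative reasoning" (inconsistent pair).
news = "NASA confirms water ice at the lunar south pole."
pos_reasoning = "Multiple instruments independently detected the ice deposits."
neg_reasoning = "The moon is made of frozen seawater, so ice is expected everywhere."

loss = (consistency_loss(news, pos_reasoning, 1)
        + consistency_loss(news, neg_reasoning, 0)
        + distill_loss(news, pos_reasoning))
loss.backward()
```

In this sketch, distillation keeps reasoning text out of the student's input while still transferring the teacher's consistency signal, mirroring the abstract's motivation for evaluating with a news-only \emph{NRFE-D}.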
@article{zhang2025_2503.09153,
  title   = {Is LLMs Hallucination Usable? LLM-based Negative Reasoning for Fake News Detection},
  author  = {Chaowei Zhang and Zongling Feng and Zewei Zhang and Jipeng Qiang and Guandong Xu and Yun Li},
  journal = {arXiv preprint arXiv:2503.09153},
  year    = {2025}
}