This paper introduces the first formal definition of valuable hallucinations in large language models (LLMs), addressing a gap in the existing literature. We provide a systematic definition and analysis of hallucination value, proposing methods for enhancing the value of hallucinations. In contrast to previous works, which often treat hallucinations as a broad flaw, we focus on the potential value that certain types of hallucinations can offer in specific contexts. Hallucinations in LLMs generally refer to the generation of unfaithful, fabricated, inconsistent, or nonsensical content. Rather than viewing all hallucinations negatively, this paper gives formal representations and manual judgments of "valuable hallucinations" and explores how realizable non-realistic propositions (ideas that are not currently true but could be achievable under certain conditions) can have constructive value. We present experiments using the Qwen2.5 model and the HalluQA dataset, employing ReAct prompting (which involves reasoning, confidence assessment, and answer verification) to control and optimize hallucinations. Our findings show that ReAct prompting reduces overall hallucinations and increases the proportion of valuable hallucinations. These results demonstrate that systematically controlling hallucinations can improve their usefulness without compromising factual reliability.
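To make the ReAct-style control loop concrete, below is a minimal Python sketch of the three stages the abstract names: reasoning, confidence assessment, and answer verification. The prompt wording, the confidence threshold, and the call_model placeholder are illustrative assumptions, not the authors' exact implementation or prompts.

    # Sketch of a ReAct-style prompting loop with the three stages from the
    # abstract: reasoning, confidence assessment, and answer verification.
    # Template wording, threshold, and call_model are assumptions for
    # illustration only, not the paper's actual prompts.

    REACT_TEMPLATE = """Question: {question}

    Thought: Reason step by step about what is known and what is speculative.
    Confidence: State a confidence score between 0 and 1 for the draft answer.
    Verification: Check the draft answer against the reasoning; flag any claim
    that is fabricated, inconsistent, or unverifiable.
    Answer: Give the final answer, marking speculative content explicitly."""


    def call_model(prompt: str) -> str:
        """Placeholder for an LLM call (e.g., a Qwen2.5 chat endpoint)."""
        raise NotImplementedError("Wire this to your model of choice.")


    def parse_confidence(response: str) -> float:
        """Pull the numeric score from the 'Confidence:' line; default to 0."""
        for line in response.splitlines():
            if line.lower().startswith("confidence:"):
                try:
                    return float(line.split(":", 1)[1].strip().split()[0])
                except ValueError:
                    return 0.0
        return 0.0


    def react_answer(question: str, threshold: float = 0.7) -> str:
        """Run one ReAct-style pass; re-ask with a stricter instruction when
        the model's self-reported confidence falls below the threshold."""
        response = call_model(REACT_TEMPLATE.format(question=question))
        if parse_confidence(response) < threshold:
            response = call_model(
                REACT_TEMPLATE.format(question=question)
                + "\nOnly state claims you can verify; label the rest as speculative."
            )
        return response

In such a loop, the verification stage is what separates potentially valuable speculation (labeled realizable-but-not-yet-true content) from plain fabrication, which is the distinction the paper's evaluation targets.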
@article{chen2025_2502.11113,
  title={Valuable Hallucinations: Realizable Non-realistic Propositions},
  author={Qiucheng Chen and Bo Wang},
  journal={arXiv preprint arXiv:2502.11113},
  year={2025}
}