Don't Let It Hallucinate: Premise Verification via Retrieval-Augmented Logical Reasoning

Abstract

Large language models (LLMs) have shown substantial capacity for generating fluent, contextually appropriate responses. However, they can produce hallucinated outputs, especially when a user query includes one or more false premises, i.e., claims that contradict established facts. Such premises can mislead LLMs into offering fabricated or misleading details. Existing approaches include pretraining, fine-tuning, and inference-time techniques that often rely on access to logits or address hallucinations after they occur. These methods tend to be computationally expensive, require extensive training data, or lack proactive mechanisms to prevent hallucination before generation, limiting their efficiency in real-time applications. We propose a retrieval-based framework that identifies and addresses false premises before generation. Our method first transforms a user's query into a logical representation, then applies retrieval-augmented generation (RAG) to assess the validity of each premise using factual sources. Finally, we incorporate the verification results into the LLM's prompt to maintain factual consistency in the final output. Experiments show that this approach effectively reduces hallucinations, improves factual accuracy, and does not require access to model logits or large-scale fine-tuning.
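To make the three-stage pipeline in the abstract concrete, here is a minimal sketch of the overall flow: extract premises from the query, verify each premise against retrieved evidence, and prepend the verification results to the generation prompt. All helper names (extract_premises, retrieve_evidence, verify_premise, build_verified_prompt) and the toy corpus are illustrative assumptions, not the authors' implementation; a real system would use an LLM for logical-form extraction, a dense or sparse retriever, and an LLM or NLI model for the verification step.

```python
# Illustrative sketch of premise verification via retrieval, assuming
# placeholder components in place of the paper's actual models.
from dataclasses import dataclass


@dataclass
class PremiseCheck:
    premise: str
    supported: bool
    evidence: str


# Toy factual corpus standing in for the retrieval index used by RAG.
CORPUS = [
    "The Eiffel Tower is located in Paris, France.",
    "Water boils at 100 degrees Celsius at sea level.",
]

# Small stopword list so the toy verifier only compares content words.
STOPWORDS = {"since", "the", "is", "in", "a", "an", "of", "at", "when", "was", "it", "there"}


def extract_premises(query: str) -> list[str]:
    """Placeholder for logical-form extraction: keep declarative clauses,
    drop the question itself."""
    clauses = [c.strip() for c in query.split(",") if c.strip()]
    return [c for c in clauses if "?" not in c]


def retrieve_evidence(premise: str) -> str:
    """Naive keyword-overlap retrieval over the toy corpus (stands in for
    a real retriever)."""
    words = set(premise.lower().split())
    return max(CORPUS, key=lambda doc: len(words & set(doc.lower().split())))


def verify_premise(premise: str) -> PremiseCheck:
    """Toy verification: a premise is 'supported' only if all of its
    content words appear in the retrieved evidence. A real system would
    use an LLM or NLI model for this judgment."""
    evidence = retrieve_evidence(premise)
    content = {w.strip("?.,").lower() for w in premise.split()} - STOPWORDS
    evidence_words = {w.strip("?.,").lower() for w in evidence.split()}
    supported = bool(content) and content <= evidence_words
    return PremiseCheck(premise, supported, evidence)


def build_verified_prompt(query: str) -> str:
    """Prepend verification results so the generator is warned about any
    unsupported (potentially false) premises before answering."""
    checks = [verify_premise(p) for p in extract_premises(query)]
    lines = ["Premise verification results:"]
    for c in checks:
        status = "SUPPORTED" if c.supported else "NOT SUPPORTED"
        lines.append(f"- '{c.premise}' -> {status} (evidence: {c.evidence})")
    lines.append("Answer the user query, correcting any unsupported premises.")
    lines.append(f"User query: {query}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_verified_prompt(
        "Since the Eiffel Tower is in Rome, when was it moved there?"))
```

In this toy run, the false premise "the Eiffel Tower is in Rome" is flagged as NOT SUPPORTED because the retrieved evidence places the tower in Paris; the flagged result is then injected into the prompt so the downstream model can correct rather than elaborate on the false premise.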

@article{qin2025_2504.06438,
  title={Don't Let It Hallucinate: Premise Verification via Retrieval-Augmented Logical Reasoning},
  author={Yuehan Qin and Shawn Li and Yi Nian and Xinyan Velocity Yu and Yue Zhao and Xuezhe Ma},
  journal={arXiv preprint arXiv:2504.06438},
  year={2025}
}