Critique Before Thinking: Mitigating Hallucination through Rationale-Augmented Instruction Tuning

Abstract

Despite significant advancements in multimodal reasoning tasks, existing Large Vision-Language Models (LVLMs) are prone to producing visually ungrounded responses when interpreting associated images. In contrast, when humans set out to learn new material, they often rely on fundamental pre-study practices: reviewing outlines to grasp core concepts and summarizing key points to guide their focus and deepen understanding. However, such preparatory steps are notably absent from current instruction tuning pipelines. This paper presents Re-Critic, an easily scalable rationale-augmented framework designed to incorporate fundamental rules and chain-of-thought (CoT) as a bridge to enhance reasoning abilities. Specifically, Re-Critic develops a visual rationale synthesizer that scalably augments raw instructions with rationale explanations. To elicit more contextually grounded responses, Re-Critic employs an in-context self-critic mechanism to select response pairs for preference tuning. Experiments demonstrate that models fine-tuned with our rationale-augmented dataset yield gains that extend beyond hallucination-specific tasks to broader multimodal reasoning tasks.
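
To make the two components concrete, the following is a minimal sketch (not the paper's implementation) of how rationale-augmented instructions and self-critic preference pairs might be assembled. The record layout and all function names (e.g., `augment_with_rationale`, `build_preference_pair`, the stub synthesizer and critic) are assumptions made for illustration; in practice both the rationale synthesizer and the in-context critic would be LVLM calls.

```python
# Sketch only: rationale-augmented example construction and
# critic-based selection of (chosen, rejected) responses for preference tuning.
from dataclasses import dataclass
from typing import Callable


@dataclass
class RationaleAugmentedExample:
    image_path: str
    instruction: str
    rationale: str   # synthesized CoT-style explanation grounding the answer in the image
    response: str


def augment_with_rationale(image_path: str, instruction: str, response: str,
                           synthesizer: Callable[[str, str, str], str]) -> RationaleAugmentedExample:
    """Wrap a raw (image, instruction, response) triple with a synthesized rationale."""
    rationale = synthesizer(image_path, instruction, response)
    return RationaleAugmentedExample(image_path, instruction, rationale, response)


def build_preference_pair(example: RationaleAugmentedExample,
                          candidates: list[str],
                          critic: Callable[[RationaleAugmentedExample, str], float]) -> tuple[str, str]:
    """Score candidate responses with a critic conditioned on the rationale;
    return (chosen, rejected) for DPO-style preference tuning."""
    ranked = sorted(candidates, key=lambda c: critic(example, c), reverse=True)
    return ranked[0], ranked[-1]


if __name__ == "__main__":
    # Toy stubs standing in for LVLM-based synthesizer and critic.
    synth = lambda img, ins, res: f"Key visual evidence relevant to: {ins}"
    critic = lambda ex, cand: float(len(set(cand.split()) & set(ex.rationale.split())))

    ex = augment_with_rationale("cat.jpg", "What is the animal doing?",
                                "The cat is sleeping.", synth)
    chosen, rejected = build_preference_pair(
        ex, ["The cat is sleeping on a sofa.", "The dog is running outside."], critic)
    print(chosen, "|", rejected)
```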

@article{yang2025_2505.07172,
  title={Critique Before Thinking: Mitigating Hallucination through Rationale-Augmented Instruction Tuning},
  author={Zexian Yang and Dian Li and Dayan Wu and Gang Liu and Weiping Wang},
  journal={arXiv preprint arXiv:2505.07172},
  year={2025}
}