Residual Decoding: Mitigating Hallucinations in Large Vision-Language Models via History-Aware Residual Guidance

Xinrong Chen
Xu Chu
Yingmin Qiu
Hengyuan Zhang
Jing Xiong
Shiyu Tang
Shuai Liu
Shaokang Yang
Cheng Yang
Hayden Kwok-Hay So
Ngai Wong
8 pages main text, 8 figures, 7 tables, 3 pages bibliography, 7 pages appendix
Abstract

Large Vision-Language Models (LVLMs) reason effectively over image-text inputs and perform well across diverse multimodal tasks. Despite this success, they are susceptible to language priors and often produce hallucinations: generated content that is grammatically and syntactically coherent yet has no match or direct relevance to the actual visual input. To address this problem, we propose Residual Decoding (ResDec), a novel training-free method that exploits historical information to guide decoding. ResDec leverages the internal implicit reasoning mechanism and the token-logits evolution of LVLMs to correct such biases. Extensive experiments demonstrate that ResDec effectively suppresses hallucinations induced by language priors, significantly improves visual grounding, and reduces object hallucination. Beyond mitigating hallucinations, ResDec also performs strongly on comprehensive LVLM benchmarks, highlighting its broad applicability.
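To make the idea of history-aware residual guidance concrete, here is a minimal sketch of one plausible form such a decoding rule could take. The abstract does not specify the update, so the function name `residual_guided_logits`, the use of a mean over historical logits, and the strength parameter `alpha` are all assumptions for illustration, not the paper's actual method.

```python
import torch

def residual_guided_logits(logits_t, logits_history, alpha=0.5):
    """Hedged sketch of history-aware residual guidance (not the paper's exact rule).

    logits_t:        current-step logits, shape (vocab_size,)
    logits_history:  list of logits tensors from earlier decoding steps
    alpha:           hypothetical guidance-strength hyperparameter
    """
    if not logits_history:
        # No history at the first step; decode from the raw logits.
        return logits_t
    # Average of historical logits as a rough proxy for the language-prior trend.
    history_mean = torch.stack(logits_history).mean(dim=0)
    # The residual measures how the current step deviates from that trend;
    # amplifying it nudges decoding away from prior-driven tokens.
    residual = logits_t - history_mean
    return logits_t + alpha * residual
```

In such a scheme, the decoder would record the logits at each generation step and apply the adjustment before softmax and sampling, so that tokens favored only by the accumulated language prior are down-weighted relative to those supported by the current visual evidence.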
