
Watch Closely: Mitigating Object Hallucinations in Large Vision-Language Models with Disentangled Decoding

Ruiqi Ma
Yu Yan
Chunhong Zhang
Minghao Yin
XinChao Liu
Zhihong Jin
Zheng Hu
Main: 9 pages · Bibliography: 2 pages · Appendix: 4 pages · 9 figures · 5 tables
Abstract

Large Vision-Language Models (LVLMs) bridge the gap between visual and linguistic modalities and show strong potential across a variety of domains. Despite significant progress, however, LVLMs still suffer from severe hallucination in object recognition: they often fail to accurately identify certain objects, producing text that reads fluently but does not correspond to the visual content, which can have serious consequences in real-world applications. Several methods have recently been proposed to alleviate LVLM hallucinations, but most focus solely on reducing hallucinations in the language modality. To mitigate hallucinations in both the language and visual modalities, we introduce Hallucination Disentangled Decoding (HDD), a training-free method. HDD segments the original image and selects sub-images that augment it, while also using a blank image to eliminate language-prior hallucinations in both the original and the segmented images. This design not only reduces the model's dependence on language priors but also enhances its visual performance. (Code: this https URL)
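The abstract describes a contrastive-style decoding scheme: evidence from the original image and from selected segmented sub-images is combined, while a blank-image pass is used to cancel language-prior hallucinations. Below is a minimal sketch of what one such decoding step could look like; the fusion rule, the weights `alpha` and `beta`, and the `model(text_ids, images=...)` interface are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a disentangled decoding step, based only on the
# abstract: combine next-token logits conditioned on the original image and
# on selected segmented sub-images, and subtract logits from a blank image
# to suppress language-prior hallucinations. The model interface and the
# weights alpha/beta are assumptions for illustration.
import torch

def hdd_logits(model, text_ids, original_img, segment_imgs, blank_img,
               alpha=1.0, beta=0.5):
    """Return adjusted next-token logits for one decoding step."""
    logits_orig = model(text_ids, images=original_img)   # visual evidence (full image)
    logits_blank = model(text_ids, images=blank_img)     # language prior only

    # Average logits over the sub-images selected to augment the original view.
    logits_seg = torch.stack(
        [model(text_ids, images=seg) for seg in segment_imgs]
    ).mean(dim=0)

    # Remove the blank-image (prior-only) contribution from both visual
    # branches, then mix full-image and segment-level evidence.
    visual_orig = logits_orig - alpha * logits_blank
    visual_seg = logits_seg - alpha * logits_blank
    return visual_orig + beta * visual_seg
```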
