
Mitigating Hallucinations in Large Vision-Language Models by Adaptively Constraining Information Flow

Abstract

Large vision-language models show tremendous potential in understanding visual information through human languages. However, they are prone to object hallucination, i.e., the generated image descriptions contain objects that do not exist in the image. In this paper, we reveal that object hallucination can be attributed to overconfidence in irrelevant visual features when soft visual tokens are mapped to the LLM's word embedding space. Specifically, by measuring the semantic similarity between visual tokens and the LLM's word embeddings, we observe that the smoothness of the similarity distribution strongly correlates with the emergence of object hallucinations. To mitigate hallucinations, we propose using the Variational Information Bottleneck (VIB) to alleviate overconfidence by introducing stochastic noise, which facilitates constraining irrelevant information. Furthermore, we propose an entropy-based noise-controlling strategy that adaptively constrains the injected noise according to the smoothness of the similarity distribution, yielding AdaVIB. We apply AdaVIB across distinct model architectures. Experimental results demonstrate that AdaVIB mitigates object hallucinations by effectively alleviating overconfidence in irrelevant visual features, with consistent improvements on two object hallucination benchmarks.
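
To make the mechanism concrete, the following is a minimal PyTorch sketch of a VIB-style projector whose injected noise is gated by the entropy of the visual-token/word-embedding similarity distribution. It reflects a plausible reading of the abstract, not the authors' released implementation; all module and variable names (e.g., AdaptiveVIBProjector, mu_head, logvar_head) and the exact form of the entropy gate are assumptions.

```python
# Minimal sketch (not the authors' code): a VIB-style projector whose injected
# Gaussian noise is gated by the entropy of the visual-token / word-embedding
# similarity distribution. Names and the gating formula are assumptions.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveVIBProjector(nn.Module):
    def __init__(self, vis_dim: int, llm_dim: int):
        super().__init__()
        self.mu_head = nn.Linear(vis_dim, llm_dim)      # posterior mean
        self.logvar_head = nn.Linear(vis_dim, llm_dim)  # posterior log-variance

    def forward(self, vis_feats: torch.Tensor, word_embeds: torch.Tensor):
        # vis_feats:   (batch, num_visual_tokens, vis_dim)
        # word_embeds: (vocab_size, llm_dim), the LLM's word embedding table
        mu = self.mu_head(vis_feats)
        logvar = self.logvar_head(vis_feats)

        # Similarity distribution of each soft visual token over the vocabulary.
        sims = F.softmax(mu @ word_embeds.t(), dim=-1)  # (batch, tokens, vocab)

        # Normalized entropy in [0, 1]: low values indicate a peaked
        # (overconfident) distribution, high values a smooth one.
        entropy = -(sims * (sims + 1e-8).log()).sum(dim=-1)
        norm_entropy = entropy / math.log(word_embeds.size(0))

        # Reparameterization trick: inject noise whose scale depends on the
        # normalized entropy (here, more noise for overconfident tokens; the
        # precise gating direction and shape are assumptions).
        gate = 1.0 - norm_entropy
        std = torch.exp(0.5 * logvar)
        soft_tokens = mu + gate.unsqueeze(-1) * std * torch.randn_like(std)

        # Standard VIB KL regularizer against a unit Gaussian prior.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()
        return soft_tokens, kl
```

In use, soft_tokens would replace the deterministic projector output fed to the LLM, and kl would be added to the training loss with a small weight.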

@article{bai2025_2502.20750,
  title={Mitigating Hallucinations in Large Vision-Language Models by Adaptively Constraining Information Flow},
  author={Jiaqi Bai and Hongcheng Guo and Zhongyuan Peng and Jian Yang and Zhoujun Li and Mohan Li and Zhihong Tian},
  journal={arXiv preprint arXiv:2502.20750},
  year={2025}
}