Octopus: Alleviating Hallucination via Dynamic Contrastive Decoding
Large Vision-Language Models (LVLMs) have achieved impressive performance in visual content understanding and multi-modal reasoning. Unfortunately, these large models suffer from serious hallucination problems and tend to generate fabricated responses. Recently, several Contrastive Decoding (CD) strategies have been proposed to alleviate hallucination by introducing disturbed inputs. Although great progress has been made, these CD strategies mostly apply a one-size-fits-all approach across all input conditions. In this paper, we revisit this process through extensive experiments. The results show that hallucination causes are hybrid and that each generative step faces a unique hallucination challenge. Leveraging these insights, we introduce a simple yet effective Octopus-like framework that enables the model to adaptively identify hallucination types and construct a dynamic CD workflow. Our Octopus framework not only outperforms existing methods across four benchmarks but also demonstrates excellent deployability and expansibility. Code is available at this https URL.
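The contrastive decoding idea referenced in the abstract can be illustrated with a minimal sketch. This is not the paper's Octopus method; it is a generic, hypothetical CD step in which the logits produced from a disturbed input (e.g. a noised image) are subtracted from the original logits, penalizing tokens that the model favors even without reliable visual evidence. The function name, `alpha` parameter, and toy values are all assumptions for illustration.

```python
import numpy as np

def contrastive_decode(logits_orig, logits_disturbed, alpha=1.0):
    """Hypothetical contrastive decoding step (not the Octopus framework):
    amplify the original logits and subtract the logits obtained from a
    disturbed input, so tokens driven by priors rather than the visual
    input are down-weighted."""
    return (1 + alpha) * logits_orig - alpha * logits_disturbed

# Toy example: token 2 is the greedy choice on the original input,
# but it is also strongly favored under the disturbed input,
# suggesting it may be a hallucination; CD shifts the choice.
orig = np.array([1.5, 1.0, 2.0])
disturbed = np.array([0.5, 0.5, 2.5])
cd_logits = contrastive_decode(orig, disturbed, alpha=1.0)
print(int(np.argmax(orig)))       # greedy pick on original logits
print(int(np.argmax(cd_logits)))  # pick after contrastive adjustment
```

A fixed `alpha` corresponds to the one-size-fits-all behavior the paper critiques; the abstract's dynamic workflow would instead select the disturbance and contrast strategy per generative step.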
@article{suo2025_2503.00361,
  title={Octopus: Alleviating Hallucination via Dynamic Contrastive Decoding},
  author={Wei Suo and Lijun Zhang and Mengyang Sun and Lin Yuanbo Wu and Peng Wang and Yanning Zhang},
  journal={arXiv preprint arXiv:2503.00361},
  year={2025}
}