
THRONE: An Object-based Hallucination Benchmark for the Free-form Generations of Large Vision-Language Models

Abstract

Mitigating hallucinations in large vision-language models (LVLMs) remains an open problem. Recent benchmarks do not address hallucinations in open-ended free-form responses, which we term "Type I hallucinations". Instead, they focus on hallucinations responding to very specific question formats -- typically a multiple-choice response regarding a particular object or attribute -- which we term "Type II hallucinations". Additionally, such benchmarks often require external API calls to models which are subject to change. In practice, we observe that a reduction in Type II hallucinations does not lead to a reduction in Type I hallucinations; rather, the two forms of hallucinations are often anti-correlated. To address this, we propose THRONE, a novel object-based automatic framework for quantitatively evaluating Type I hallucinations in LVLM free-form outputs. We use public language models (LMs) to identify hallucinations in LVLM responses and compute informative metrics. By evaluating a large selection of recent LVLMs using public datasets, we show that improvements on existing metrics do not lead to a reduction in Type I hallucinations, and that established benchmarks for measuring Type I hallucinations are incomplete. Finally, we provide a simple and effective data augmentation method to reduce Type I and Type II hallucinations as a strong baseline. Code is now available at this https URL.
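The abstract describes an object-based evaluation: objects mentioned in a model's free-form response are checked against the objects actually present in the image. A minimal sketch of such object-level metrics is shown below; the function name `object_metrics` and the use of set overlap with precision/recall/F-score are illustrative assumptions, not the paper's exact formulation (which relies on LMs to extract mentions from free-form text).

```python
def object_metrics(mentioned, ground_truth):
    """Illustrative object-level metrics for a free-form caption.

    mentioned:    objects an LM extracted from the model's response (assumed input)
    ground_truth: objects annotated as present in the image
    Returns (precision, recall, f_score) over the object sets.
    """
    mentioned = set(mentioned)
    ground_truth = set(ground_truth)
    true_positives = mentioned & ground_truth  # correctly mentioned objects
    # Hallucinated objects are mentions absent from the ground truth,
    # so precision falls as hallucinations increase.
    precision = len(true_positives) / len(mentioned) if mentioned else 1.0
    recall = len(true_positives) / len(ground_truth) if ground_truth else 1.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score
```

For example, a response mentioning {"dog", "cat", "car"} against ground truth {"dog", "cat"} scores precision 2/3 (one hallucinated object) and recall 1.0.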

@article{kaul2025_2405.05256,
  title={THRONE: An Object-based Hallucination Benchmark for the Free-form Generations of Large Vision-Language Models},
  author={Prannay Kaul and Zhizhong Li and Hao Yang and Yonatan Dukler and Ashwin Swaminathan and C. J. Taylor and Stefano Soatto},
  journal={arXiv preprint arXiv:2405.05256},
  year={2025}
}