
HARMONY: Hidden Activation Representations and Model Output-Aware Uncertainty Estimation for Vision-Language Models

Main: 9 pages · 8 figures · 9 tables · Bibliography: 4 pages · Appendix: 3 pages
Abstract

Uncertainty Estimation (UE) plays a central role in quantifying the reliability of model outputs and reducing unsafe generations via selective prediction. Most existing probability-based UE approaches rely on predefined functions that aggregate token probabilities into a single UE score using heuristics such as length-normalization. These methods often fail to capture the complex relationships between generated tokens and struggle to identify biased probabilities that are frequently influenced by language priors. Another line of research uses the model's hidden representations and trains simple MLP architectures to predict uncertainty, but such architectures often lose the intricate inter-token dependencies. While prior work shows that hidden representations encode multimodal alignment signals, our work demonstrates that how these signals are processed has a significant impact on UE performance. To effectively leverage these signals for capturing inter-token dependencies and vision-text alignment, we propose HARMONY (Hidden Activation Representations and Model Output-Aware Uncertainty Estimation for Vision-Language Models), a novel UE framework that integrates, at the token level, the generated tokens ('text'), the model's output uncertainty score ('MaxProb'), and its internal belief about its visual understanding of the image and the generated token (captured by 'hidden representations'), through an appropriate input-mapping design and a suitable architecture choice. Our experiments across two open-ended VQA benchmarks (A-OKVQA and VizWiz) and four state-of-the-art VLMs (LLaVA-7B, LLaVA-13B, InstructBLIP, and Qwen-VL) show that HARMONY consistently matches or surpasses existing approaches, achieving up to a 5% improvement in AUROC and 9% in PRR.
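For readers unfamiliar with the probability-based baselines the abstract contrasts against, the following is a minimal, illustrative sketch of two such heuristics (a sequence-probability "MaxProb"-style score and a length-normalized aggregation of token log-probabilities). The function names, the exact scoring formulas, and the use of NumPy are our own assumptions for illustration; this is not the paper's implementation of HARMONY.

```python
# Illustrative sketch (assumed formulations, not the paper's code):
# two common probability-based UE baselines computed from the per-token
# probabilities of a generated answer. Higher score = more uncertain.

import numpy as np


def maxprob_uncertainty(token_probs):
    """'MaxProb'-style score: one minus the sequence probability
    (the product of per-token probabilities)."""
    token_probs = np.asarray(token_probs, dtype=float)
    return 1.0 - float(np.prod(token_probs))


def length_normalized_uncertainty(token_probs):
    """Length-normalized score: negative mean token log-probability,
    which avoids penalizing longer answers for having more tokens."""
    token_probs = np.asarray(token_probs, dtype=float)
    return float(-np.mean(np.log(token_probs + 1e-12)))


# Example: per-token probabilities of a short generated answer.
probs = [0.92, 0.85, 0.40, 0.97]
print(maxprob_uncertainty(probs))            # ~0.70
print(length_normalized_uncertainty(probs))  # ~0.30
```

Both scores collapse the whole generation into a single scalar from output probabilities alone, which is exactly the limitation the abstract points to: they ignore inter-token dependencies and any signal from the model's hidden representations.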
