
Uncertainty Profiles for LLMs: Uncertainty Source Decomposition and Adaptive Model-Metric Selection

Abstract

Large language models (LLMs) often generate fluent but factually incorrect outputs, known as hallucinations, which undermine their reliability in real-world applications. While uncertainty estimation has emerged as a promising strategy for detecting such errors, current metrics offer limited interpretability and lack clarity about the types of uncertainty they capture. In this paper, we present a systematic framework for decomposing LLM uncertainty into four distinct sources, inspired by previous research. We develop a source-specific estimation pipeline to quantify these uncertainty types and evaluate how existing metrics relate to each source across tasks and models. Our results show that metrics, tasks, and models exhibit systematic variation in their uncertainty characteristics. Building on this, we propose a method for task-specific metric and model selection, guided by the alignment or divergence between the uncertainty characteristics of a metric or model and those of a given task. Experiments across datasets and models demonstrate that our uncertainty-aware selection strategy consistently outperforms baseline strategies, selecting appropriate models and uncertainty metrics and supporting more reliable and efficient deployment of uncertainty estimation.
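To make the selection idea concrete, here is a minimal Python sketch of how profile-based metric selection could look. The metric names, four-dimensional profiles, and similarity criterion below are illustrative assumptions, not the paper's actual decomposition or scoring rule: the sketch simply picks the metric whose (hypothetical) uncertainty profile is most aligned, by cosine similarity, with a task's estimated profile.

import numpy as np

# Hypothetical uncertainty profiles: each metric is scored on how strongly
# it responds to four uncertainty sources (dimension labels and values are
# illustrative, not taken from the paper).
METRIC_PROFILES = {
    "semantic_entropy": np.array([0.7, 0.4, 0.1, 0.6]),
    "token_logprob":    np.array([0.2, 0.3, 0.5, 0.8]),
    "self_consistency": np.array([0.5, 0.8, 0.2, 0.4]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two profile vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_metric(task_profile: np.ndarray) -> str:
    # Pick the metric whose profile best aligns with the task's profile.
    return max(METRIC_PROFILES, key=lambda m: cosine(METRIC_PROFILES[m], task_profile))

# Example: a task whose uncertainty is dominated by the second source.
task = np.array([0.3, 0.9, 0.1, 0.2])
print(select_metric(task))  # -> "self_consistency" under these toy numbers

The same alignment test could be applied to model selection by replacing the metric profiles with per-model uncertainty profiles; the paper evaluates both settings.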

@article{guo2025_2505.07309,
  title={Uncertainty Profiles for LLMs: Uncertainty Source Decomposition and Adaptive Model-Metric Selection},
  author={Pei-Fu Guo and Yun-Da Tsai and Shou-De Lin},
  journal={arXiv preprint arXiv:2505.07309},
  year={2025}
}