
Dist2ill: Distributional Distillation for One-Pass Uncertainty Estimation in Large Language Models

Qi Xu
Tunyu Zhang
Yi Wang
Ligong Han
Bradley A. Malin
Hao Wang
Main: 8 pages · Bibliography: 4 pages · Appendix: 16 pages · 4 figures · 6 tables
Abstract

Large Language Models (LLMs) often exhibit misalignment between the quality of their generated responses and the confidence estimates they assign to them. Bayesian treatments, such as marginalizing over a reliable weight posterior or over the space of reasoning traces, provide an effective remedy, but incur substantial computational overhead due to repeated sampling at test time. To enable accurate uncertainty estimation in a single forward pass, we propose a novel distributional distillation framework (Dist2ill) that trains an LLM to produce multiple diverse reasoning paths within one inference pass, while using a lightweight parametric module to approximate empirical confidence scores derived from the sampling distribution. Extensive experiments demonstrate that Dist2ill preserves reasoning diversity and achieves state-of-the-art uncertainty estimation, substantially improving Expected Calibration Error (ECE) and Negative Log-Likelihood (NLL), while remaining computationally efficient.
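The abstract gives only a high-level view of the method; implementation details are in the paper itself. As a rough illustration of the distillation target it describes, the sketch below shows one plausible reading: an empirical confidence is computed from agreement among sampled reasoning paths (a self-consistency-style teacher signal), and a lightweight parametric head is trained to predict that score from a single answer representation, so that at test time confidence comes from one forward pass. All names (`ConfidenceHead`, `empirical_confidence`), shapes, and the choice of loss are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ConfidenceHead(nn.Module):
    """Hypothetical lightweight module: maps a hidden-state representation
    of a generated answer to a scalar confidence in [0, 1]."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 4),
            nn.GELU(),
            nn.Linear(hidden_dim // 4, 1),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Squeeze the trailing dim so output shape matches the targets: (batch,)
        return torch.sigmoid(self.mlp(h)).squeeze(-1)

def empirical_confidence(sampled_answers: list[str], answer: str) -> float:
    """Teacher signal: fraction of sampled reasoning paths whose final
    answer agrees with the given answer (self-consistency agreement)."""
    agree = sum(a == answer for a in sampled_answers)
    return agree / max(len(sampled_answers), 1)

if __name__ == "__main__":
    # Toy training step; random tensors stand in for real model states.
    torch.manual_seed(0)
    hidden_dim, batch = 64, 8
    head = ConfidenceHead(hidden_dim)
    opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

    # Pretend these came from the LLM: one representation per answer, and
    # an empirical confidence in [0, 1] derived from K sampled paths.
    hidden_states = torch.randn(batch, hidden_dim)
    targets = torch.rand(batch)

    opt.zero_grad()
    pred = head(hidden_states)
    # Distill the sampling-based confidence into the one-pass head.
    loss = nn.functional.binary_cross_entropy(pred, targets)
    loss.backward()
    opt.step()
    print(f"distillation loss: {loss.item():.4f}")
```

In a real pipeline the targets would come from repeated sampling done once at training time, which is what lets the head replace test-time sampling.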
