
MetaFaith: Faithful Natural Language Uncertainty Expression in LLMs

Main: 15 pages
Appendix: 20 pages
Bibliography: 7 pages
22 figures
16 tables
Abstract

A critical component of LLM trustworthiness is reliable uncertainty communication, yet LLMs often use assertive language when conveying false claims, leading to over-reliance and eroded trust. We present the first systematic study of faithful confidence calibration of LLMs, benchmarking models' ability to use linguistic expressions of uncertainty that faithfully reflect their intrinsic uncertainty, across a comprehensive array of models, datasets, and prompting strategies. Our results demonstrate that LLMs largely fail at this task, and that existing interventions are insufficient: standard prompting approaches provide only marginal gains, and existing factuality-based calibration techniques can even harm faithful calibration. To address this critical gap, we introduce MetaFaith, a novel prompt-based calibration approach inspired by human metacognition. We show that MetaFaith robustly improves faithful calibration across diverse models and task domains, enabling up to 61% improvement in faithfulness and achieving an 83% win rate over original generations as judged by humans.
