Hallucination Detection on a Budget: Efficient Bayesian Estimation of Semantic Entropy

Detecting whether an LLM hallucinates is an important research challenge. One promising approach is to estimate the semantic entropy (Farquhar et al., 2024) of the distribution of generated sequences. We propose a new algorithm for doing so, with two main advantages. First, because we take a Bayesian approach, we obtain much higher-quality semantic entropy estimates for a given budget of samples from the LLM. Second, we can tune the number of samples adaptively, so that 'harder' contexts receive more samples. We demonstrate empirically that our approach systematically beats the baselines, requiring only 59% of the samples used by Farquhar et al. (2024) to achieve the same quality of hallucination detection as measured by AUROC. Moreover, perhaps counterintuitively, our estimator remains useful even with just one sample from the LLM.
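For context, the quantity being estimated can be sketched as follows. This is a minimal plug-in (Monte Carlo) estimate of semantic entropy in the spirit of Farquhar et al. (2024), not the paper's Bayesian estimator: sampled answers are grouped into meaning clusters via a pairwise equivalence predicate (in practice a bidirectional-entailment check with an NLI model; the `equivalent` callable here is an assumed stand-in), and the entropy of the cluster frequencies is returned.

```python
from collections import Counter  # not strictly needed; clusters are built greedily below
import math

def semantic_entropy(samples, equivalent):
    """Naive plug-in estimate of semantic entropy.

    samples: list of generated strings from the LLM.
    equivalent: hypothetical pairwise semantic-equivalence predicate
                (an assumption here; in practice an NLI entailment check).
    """
    # Greedily cluster samples by semantic equivalence with a cluster representative.
    clusters = []
    for s in samples:
        for c in clusters:
            if equivalent(s, c[0]):
                c.append(s)
                break
        else:
            clusters.append([s])
    # Shannon entropy of the empirical cluster distribution (in nats).
    n = len(samples)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Toy usage: case-insensitive string match as a stand-in equivalence relation.
answers = ["Paris", "Paris", "Lyon", "paris"]
eq = lambda a, b: a.lower() == b.lower()
print(semantic_entropy(answers, eq))  # entropy of clusters with frequencies 3/4 and 1/4
```

The paper's contribution replaces this plug-in estimate with a Bayesian one, which is what enables both the improved sample efficiency and the adaptive sampling described above.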
@article{ciosek2025_2504.03579,
  title   = {Hallucination Detection on a Budget: Efficient Bayesian Estimation of Semantic Entropy},
  author  = {Kamil Ciosek and Nicolò Felicioni and Sina Ghiassian},
  journal = {arXiv preprint arXiv:2504.03579},
  year    = {2025}
}