Investigating Human-Aligned Large Language Model Uncertainty

16 March 2025
Kyle Moore, Jesse Roberts, Daryl Watson, Pamela Wisniewski
Abstract

Recent work has sought to quantify large language model uncertainty to facilitate model control and modulate user trust. Previous work has focused on uncertainty measures that are theoretically grounded or that reflect the model's average overt behavior. In this work, we investigate a variety of uncertainty measures to identify those that correlate with human group-level uncertainty. We find that Bayesian measures and a variation on entropy measures, top-k entropy, tend to agree with human behavior as a function of model size. We find that some strong measures decrease in human-similarity as model size grows; however, by multiple linear regression, we find that combining multiple uncertainty measures provides comparable human-alignment with reduced size-dependency.
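
To make the two key quantities concrete, below is a minimal Python sketch. It assumes "top-k entropy" means Shannon entropy over the renormalized k most probable next-token probabilities, and that measures are combined by an ordinary least-squares fit against human group-level uncertainty; the paper's exact formulations are not reproduced on this page, and all names and data in the sketch are illustrative stand-ins, not the authors' code or data.

import numpy as np
from sklearn.linear_model import LinearRegression

def top_k_entropy(logits: np.ndarray, k: int = 10) -> float:
    # Softmax over the vocabulary (shifted by the max for numerical stability).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Keep only the k most probable tokens and renormalize to a distribution.
    top = np.sort(probs)[-k:]
    top /= top.sum()
    # Shannon entropy of the truncated distribution.
    return float(-np.sum(top * np.log(top)))

rng = np.random.default_rng(0)
logits = rng.normal(size=50_000)      # hypothetical next-token logits
print(top_k_entropy(logits, k=10))

# Combining several measures via multiple linear regression:
# X holds per-item uncertainty measures (e.g. Bayesian, entropy,
# top-k entropy); y holds human group-level uncertainty per item.
# Both are synthetic placeholders here.
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.1, size=200)
combined = LinearRegression().fit(X, y)
print(combined.score(X, y))           # R^2 of the combined fit

In the paper, such a combined predictor would be compared against the individual measures across model sizes; the sketch only illustrates the mechanics.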

View on arXiv
@article{moore2025_2503.12528,
  title={Investigating Human-Aligned Large Language Model Uncertainty},
  author={Kyle Moore and Jesse Roberts and Daryl Watson and Pamela Wisniewski},
  journal={arXiv preprint arXiv:2503.12528},
  year={2025}
}