ResearchTrend.AI
Papers / 2402.16102 / Cited By
Interpreting Predictive Probabilities: Model Confidence or Human Label Variation?

25 February 2024
Joris Baan, Raquel Fernández, Barbara Plank, Wilker Aziz

Papers citing "Interpreting Predictive Probabilities: Model Confidence or Human Label Variation?"

8 / 8 papers shown
  1. Always Tell Me The Odds: Fine-grained Conditional Probability Estimation
     Liaoyaqi Wang, Zhengping Jiang, Anqi Liu, Benjamin Van Durme (02 May 2025)
  2. Specializing Large Language Models to Simulate Survey Response Distributions for Global Populations
     Yong Cao, Haijiang Liu, Arnav Arora, Isabelle Augenstein, Paul Röttger, Daniel Hershcovich (20 Feb 2025)
  3. Improving Health Question Answering with Reliable and Time-Aware Evidence Retrieval
     Juraj Vladika, Florian Matthes (12 Apr 2024) [RALM]
  4. We're Afraid Language Models Aren't Modeling Ambiguity
     Alisa Liu, Zhaofeng Wu, Julian Michael, Alane Suhr, Peter West, Alexander Koller, Swabha Swayamdipta, Noah A. Smith, Yejin Choi (27 Apr 2023)
  5. Stop Measuring Calibration When Humans Disagree
     Joris Baan, Wilker Aziz, Barbara Plank, Raquel Fernández (28 Oct 2022)
  6. Exploring Predictive Uncertainty and Calibration in NLP: A Study on the Impact of Method & Data Scarcity
     Dennis Ulmer, J. Frellsen, Christian Hardmeier (20 Oct 2022)
  7. Calibration of Pre-trained Transformers
     Shrey Desai, Greg Durrett (17 Mar 2020) [UQLM]
  8. Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
     Mor Geva, Yoav Goldberg, Jonathan Berant (21 Aug 2019)