Mind the Confidence Gap: Overconfidence, Calibration, and Distractor Effects in Large Language Models

16 February 2025
Prateek Chhikara
Abstract

Large Language Models (LLMs) demonstrate impressive performance across diverse tasks, yet confidence calibration remains a challenge. Miscalibration - where models are overconfident or underconfident - poses risks, particularly in high-stakes applications. This paper presents an empirical study on LLM calibration, examining how model size, distractors, and question types affect confidence alignment. We introduce an evaluation framework to measure overconfidence and investigate whether multiple-choice formats mitigate or worsen miscalibration. Our findings show that while larger models (e.g., GPT-4o) are better calibrated overall, they are more prone to distraction, whereas smaller models benefit more from answer choices but struggle with uncertainty estimation. Unlike prior work, which primarily reports miscalibration trends, we provide actionable insights into failure modes and conditions that worsen overconfidence. These findings highlight the need for calibration-aware interventions and improved uncertainty estimation methods.
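
The paper's own evaluation framework is not reproduced on this page, so the following is only a minimal sketch of how overconfidence and calibration are commonly quantified: expected calibration error (ECE) over binned self-reported confidences, plus a signed confidence-minus-accuracy gap. The function names, bin count, and toy data are illustrative assumptions, not the authors' implementation.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # Generic ECE sketch, not the paper's exact framework: partition confidences
    # into equal-width bins (lo, hi] and take the weighted average of
    # |bin accuracy - bin mean confidence|.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        bin_conf = confidences[in_bin].mean()   # average reported confidence in the bin
        bin_acc = correct[in_bin].mean()        # empirical accuracy in the bin
        ece += in_bin.mean() * abs(bin_acc - bin_conf)
    return ece

def overconfidence_gap(confidences, correct):
    # Signed gap: positive when mean confidence exceeds accuracy (overconfidence),
    # negative when the model is underconfident.
    return float(np.mean(confidences) - np.mean(correct))

# Illustrative usage with made-up model outputs (not data from the paper).
conf = [0.95, 0.90, 0.80, 0.60, 0.99]   # self-reported answer confidences
hit  = [1, 0, 1, 0, 1]                  # whether each answer was correct
print(expected_calibration_error(conf, hit), overconfidence_gap(conf, hit))

On these toy inputs the mean confidence exceeds the accuracy, so the gap is positive; that direction of miscalibration is what the abstract refers to as overconfidence.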

@article{chhikara2025_2502.11028,
  title={Mind the Confidence Gap: Overconfidence, Calibration, and Distractor Effects in Large Language Models},
  author={Prateek Chhikara},
  journal={arXiv preprint arXiv:2502.11028},
  year={2025}
}