Does Alignment Tuning Really Break LLMs' Internal Confidence?

31 August 2024
Hongseok Oh
Wonseok Hwang
Abstract

Large Language Models (LLMs) have shown remarkable progress, but their real-world application requires reliable calibration. This study conducts a comprehensive analysis of calibration degradation in LLMs across four dimensions: models, calibration metrics, tasks, and confidence extraction methods. Our initial analysis shows that the relationship between alignment and calibration is not always a trade-off; under stricter analysis conditions, however, we find that the alignment process consistently harms calibration. This highlights the need for (1) a careful approach to measuring model confidences and calibration errors and (2) future research into algorithms that help LLMs achieve both instruction-following and calibration without sacrificing either.
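
The abstract refers to calibration metrics and confidence extraction methods without spelling them out on this page. As a rough sketch only (not the authors' implementation), the snippet below computes Expected Calibration Error (ECE), a standard calibration metric, from per-example confidences and correctness labels; the function name and toy numbers are hypothetical, and the confidence values stand in for whatever extraction method is used (e.g. the softmax probability the model assigns to its chosen answer).

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: split predictions into equal-width confidence bins and take the
    sample-weighted average of |accuracy(bin) - mean confidence(bin)|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Bin index per prediction; clip so confidence == 1.0 lands in the last bin.
    bin_ids = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy usage (hypothetical numbers): per-example confidences paired with
# whether the corresponding answer was correct.
conf = [0.95, 0.80, 0.60, 0.99, 0.70]
hit = [1, 1, 0, 1, 0]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")

A perfectly calibrated model has an ECE near zero; the paper's claim that alignment tuning harms calibration corresponds to this gap widening after tuning.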

@article{oh2025_2409.00352,
  title={Does Alignment Tuning Really Break LLMs' Internal Confidence?},
  author={Hongseok Oh and Wonseok Hwang},
  journal={arXiv preprint arXiv:2409.00352},
  year={2025}
}