ResearchTrend.AI
Pretraining with random noise for uncertainty calibration

23 December 2024
Jeonghwan Cheon
Se-Bum Paik
Abstract

Uncertainty calibration is crucial for many machine learning applications, yet it remains challenging. Many models exhibit hallucinations (confident yet inaccurate responses) due to miscalibrated confidence. Here, we show that random initialization, a standard practice in deep learning, is an underlying cause of this miscalibration, leading to excessively high confidence in untrained networks. Our method, inspired by developmental neuroscience, addresses this issue by simply pretraining networks with random noise and random labels, reducing overconfidence and bringing initial confidence levels closer to chance. This ensures optimal calibration, aligning confidence with accuracy during subsequent data training, without the need for additional pre- or post-processing. Pre-calibrated networks excel at identifying "unknown data," showing low confidence for out-of-distribution inputs, thereby resolving confidence miscalibration.
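The core idea described in the abstract, training a randomly initialized network on pure noise with uninformative labels so that its confidence settles near chance before real training begins, can be illustrated with a minimal sketch. This is not the authors' code: the linear softmax model, the NumPy-only setup, and the learning-rate and step-count choices are all illustrative assumptions, chosen only to show that fitting random labels drives mean confidence toward 1/K.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 32, 10  # samples, input dim, classes (arbitrary choices)

# Random-noise inputs and uniformly random labels: there is no signal to learn.
X = rng.standard_normal((n, d))
y = rng.integers(0, k, size=n)

# Randomly initialized linear softmax classifier. The random weights alone
# already produce spread-out logits, i.e. unwarranted confidence.
W = rng.standard_normal((d, k))
b = np.zeros(k)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_max_conf(W, b):
    """Average of the highest predicted class probability per sample."""
    return softmax(X @ W + b).max(axis=1).mean()

conf_before = mean_max_conf(W, b)

# Plain gradient descent on cross-entropy against the random labels.
# Because labels are independent of inputs, the loss is minimized by
# predicting the (roughly uniform) label frequencies for every input.
lr, onehot = 0.5, np.eye(k)[y]
for _ in range(300):
    p = softmax(X @ W + b)
    grad_logits = (p - onehot) / n
    W -= lr * (X.T @ grad_logits)
    b -= lr * grad_logits.sum(axis=0)

conf_after = mean_max_conf(W, b)
print(conf_before, conf_after)  # confidence falls toward chance level, 1/k
```

A deep network pretrained this way would analogously start its subsequent training on real data from near-chance confidence rather than from the overconfident random-initialization regime the paper identifies.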

@article{cheon2025_2412.17411,
  title={Pretraining with random noise for uncertainty calibration},
  author={Jeonghwan Cheon and Se-Bum Paik},
  journal={arXiv preprint arXiv:2412.17411},
  year={2025}
}