Variance Control via Weight Rescaling in LLM Pre-training

21 March 2025
Louis Owen
Abhay Kumar
Nilabhra Roy Chowdhury
Fabian Güra
Abstract

The outcome of Large Language Model (LLM) pre-training strongly depends on weight initialization and variance control strategies. Although the importance of initial variance control has been well documented for neural networks in general, the literature on initialization and on managing variance growth during LLM pre-training specifically is somewhat sparse. In this paper, we introduce the Layer Index Rescaling (LIR) weight initialization scheme and the Target Variance Rescaling (TVR) variance control strategy. Experiments on a 1B parameter LLaMA model demonstrate that better variance management using these techniques yields substantial improvements in downstream task performance (up to 4.6% on common pre-training benchmarks) and reduces extreme activation values, thus mitigating challenges associated with quantization and low-precision training. Our code is available at: this https URL.
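The abstract names the two techniques but does not spell out their mechanics, so the sketch below is only an illustrative reading, not the paper's implementation. It assumes LIR shrinks the initialization standard deviation of a layer's weights by a factor depending on its (1-based) layer index, and that TVR periodically rescales weights back toward a fixed target standard deviation during training; the names `lir_init_`, `tvr_rescale_`, `layer_idx`, `base_std`, and `target_std`, as well as the exact scaling rules, are hypothetical.

```python
# Illustrative sketch only: the exact LIR/TVR formulas are not given in this
# abstract, so the scaling rules below are assumptions for demonstration.
import math
import torch

def lir_init_(weight: torch.Tensor, layer_idx: int, base_std: float = 0.02) -> None:
    """Assumed Layer Index Rescaling: reduce the initialization std of deeper
    layers by a factor that grows with the (1-based) layer index."""
    std = base_std / math.sqrt(2.0 * layer_idx)  # hypothetical scaling rule
    torch.nn.init.normal_(weight, mean=0.0, std=std)

@torch.no_grad()
def tvr_rescale_(weight: torch.Tensor, target_std: float = 0.02) -> None:
    """Assumed Target Variance Rescaling: pull the empirical weight std back
    toward a fixed target, limiting variance growth during pre-training."""
    current_std = weight.std()
    if current_std > 0:
        weight.mul_(target_std / current_std)

# Example: initialize one projection matrix and rescale it later in training.
w = torch.empty(4096, 4096)
lir_init_(w, layer_idx=12)
# ... after some optimizer steps ...
tvr_rescale_(w, target_std=0.02)
```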

View on arXiv
@article{owen2025_2503.17500,
  title={Variance Control via Weight Rescaling in LLM Pre-training},
  author={Louis Owen and Abhay Kumar and Nilabhra Roy Chowdhury and Fabian Güra},
  journal={arXiv preprint arXiv:2503.17500},
  year={2025}
}