WISCA: A Lightweight Model Transition Method to Improve LLM Training via Weight Scaling

21 August 2025
Jiacheng Li
Jianchao Tan
Zhidong Yang
Pingwei Sun
Feiye Huo
Jiayu Qin
Yerui Sun
Yuchen Xie
Xunliang Cai
Xiangyu Zhang
Maoxin He
Guangming Tan
Weile Jia
Tong Zhao
arXiv:2508.16676 (abs) · PDF · HTML · GitHub
Main: 7 pages · Appendix: 2 pages · Bibliography: 2 pages · 7 figures · 7 tables
Abstract

The Transformer architecture has come to dominate the field of large language models (LLMs). Recent advances in training optimization for Transformer-based LLMs focus primarily on architectural modifications or optimizer adjustments, but they lack systematic optimization of weight patterns during training; a weight pattern refers to the distribution and relative magnitudes of the weight parameters in a neural network. To address this gap, we propose WISCA, a weight-scaling method that enhances training efficiency and model quality by strategically improving neural network weight patterns without changing the network structure. By rescaling weights while preserving model outputs, WISCA indirectly optimizes the model's training trajectory. Experiments demonstrate that WISCA significantly improves convergence quality (measured by generalization capability and loss reduction), particularly for LLMs with Grouped Query Attention (GQA) and in LoRA fine-tuning tasks. Empirical results show a 5.6% average improvement on zero-shot validation tasks and a 2.12% average reduction in training perplexity across multiple architectures.

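The abstract's core idea, rescaling paired weights so that the network's function is unchanged while its weight pattern shifts, can be illustrated with a short sketch. This is only a generic illustration of output-preserving rescaling, applied here to a hypothetical LoRA pair and a query/key projection pair; it is not the specific scaling rule WISCA uses, and the function names and the fixed scale factor are assumptions for the example.

```python
import torch

@torch.no_grad()
def rescale_lora_pair(lora_A, lora_B, scale):
    # Rescale a LoRA pair in place so the effective update B @ A is unchanged:
    # (B / s) @ (s * A) == B @ A.
    lora_A.mul_(scale)
    lora_B.div_(scale)

@torch.no_grad()
def rescale_qk_pair(w_q, w_k, scale):
    # Rescale query/key projection weights in place so attention logits are
    # unchanged: (s * Q) @ (K / s)^T == Q @ K^T.
    w_q.mul_(scale)
    w_k.div_(scale)

# Demo: the rescaled LoRA pair produces identical outputs, but the relative
# magnitudes of A and B (their "weight pattern") differ afterwards.
A = torch.randn(8, 256)    # LoRA down-projection (rank 8, hidden size 256)
B = torch.randn(256, 8)    # LoRA up-projection
x = torch.randn(4, 256)    # a batch of hidden states
before = x @ (B @ A).T
rescale_lora_pair(A, B, scale=2.0)
after = x @ (B @ A).T
assert torch.allclose(before, after, atol=1e-5)
```

Because common optimizers are not invariant to such reparameterizations, a functionally identical model with a different weight pattern can follow a different, and potentially better, optimization trajectory, which is the effect the paper exploits.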