GradPower: Powering Gradients for Faster Language Model Pre-Training

30 May 2025
Mingze Wang
Jinbo Wang
Jiaqi Zhang
Wei Wang
Peng Pei
Xunliang Cai
Weinan E
Lei Wu
Main: 9 pages · 8 figures · 2 tables · Bibliography: 5 pages · Appendix: 8 pages
Abstract

We propose GradPower, a lightweight gradient-transformation technique for accelerating language model pre-training. Given a gradient vector $g=(g_i)_i$, GradPower first applies the elementwise sign-power transformation $\varphi_p(g)=(\mathrm{sign}(g_i)\,|g_i|^p)_i$ for a fixed $p>0$, and then feeds the transformed gradient into a base optimizer. Notably, GradPower requires only a single-line code change and no modification to the base optimizer's internal logic, including its hyperparameters. When applied to Adam (termed AdamPower), GradPower consistently achieves lower terminal loss across diverse architectures (LLaMA, Qwen2MoE), parameter scales (66M to 2B), datasets (C4, OpenWebText), and learning-rate schedules (cosine, warmup-stable-decay). The most pronounced gains appear when training modern mixture-of-experts models with warmup-stable-decay schedules. GradPower also integrates seamlessly with other state-of-the-art optimizers, such as Muon, yielding further improvements. Finally, we provide theoretical analyses that reveal the underlying mechanism of GradPower and highlight the influence of gradient noise.
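To make the idea concrete, here is a minimal PyTorch sketch of how the sign-power transform could be slotted in front of an unmodified base optimizer. The exponent p = 1.2, the toy linear model, and the per-step loop are illustrative assumptions, not values or code from the paper; the core transform itself is the single line inside `gradpower_transform`.

```python
import torch

def gradpower_transform(grad: torch.Tensor, p: float = 1.2) -> torch.Tensor:
    """Elementwise sign-power transform: phi_p(g)_i = sign(g_i) * |g_i|**p."""
    return torch.sign(grad) * grad.abs().pow(p)

# Toy setup (illustrative only; the paper trains LLaMA/Qwen2MoE models).
model = torch.nn.Linear(16, 4)
optimizer = torch.optim.Adam(model.parameters())  # base optimizer, left unchanged

x, y = torch.randn(8, 16), torch.randn(8, 4)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()

# GradPower step: transform each gradient in place before the optimizer step,
# so the base optimizer only ever sees the transformed gradients.
with torch.no_grad():
    for param in model.parameters():
        if param.grad is not None:
            param.grad.copy_(gradpower_transform(param.grad, p=1.2))

optimizer.step()
optimizer.zero_grad()
```

Because the base optimizer only consumes the already-transformed gradients, its internal logic and hyperparameters stay untouched, matching the abstract's claim of a single-line code change.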

@article{wang2025_2505.24275,
  title={GradPower: Powering Gradients for Faster Language Model Pre-Training},
  author={Mingze Wang and Jinbo Wang and Jiaqi Zhang and Wei Wang and Peng Pei and Xunliang Cai and Weinan E and Lei Wu},
  journal={arXiv preprint arXiv:2505.24275},
  year={2025}
}