The Sharpness Disparity Principle in Transformers for Accelerating Language Model Pre-Training

Abstract

Transformers consist of diverse building blocks, such as embedding layers, normalization layers, self-attention mechanisms, and point-wise feedforward networks. Thus, understanding the differences and interactions among these blocks is important. In this paper, we uncover a clear Sharpness Disparity across these blocks, which emerges early in training and intriguingly persists throughout the training process. Motivated by this finding, we propose Blockwise Learning Rate (LR), a strategy that tailors the LR to each block's sharpness, accelerating large language model (LLM) pre-training. By integrating Blockwise LR into AdamW, we consistently achieve lower terminal loss and a nearly 2× speedup compared to vanilla AdamW. We demonstrate this acceleration on GPT-2 and LLaMA, with model sizes ranging from 0.12B to 1.1B parameters, on the OpenWebText and MiniPile datasets. Finally, we incorporate Blockwise LR into Adam-mini (Zhang et al., 2024), a recently proposed memory-efficient variant of Adam, achieving a combined 2× speedup and 2× memory saving. These results underscore the potential of exploiting the sharpness disparity to improve LLM training.
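Below is a minimal sketch of how a blockwise LR could be wired into AdamW via per-block parameter groups in PyTorch. The name-matching heuristics (e.g., "embed", "attn", "norm") and the per-block LR multipliers are illustrative placeholders, not the partition or values used in the paper.

import torch
from torch import nn

def blockwise_param_groups(model: nn.Module, base_lr: float):
    # Hypothetical per-block LR multipliers. The paper derives these from each
    # block's measured sharpness; the numbers below are placeholders only.
    multipliers = {"embedding": 1.0, "attention": 2.0, "mlp": 2.0, "norm": 4.0}
    groups = {name: [] for name in multipliers}
    for pname, param in model.named_parameters():
        # Assign each parameter to a block type by name (heuristic grouping,
        # not the authors' exact scheme).
        if any(key in pname for key in ("wte", "wpe", "embed")):
            groups["embedding"].append(param)
        elif "attn" in pname or "attention" in pname:
            groups["attention"].append(param)
        elif "norm" in pname or "ln_" in pname:
            groups["norm"].append(param)
        else:
            groups["mlp"].append(param)
    return [{"params": params, "lr": base_lr * multipliers[name]}
            for name, params in groups.items() if params]

# Usage: each block type receives its own learning rate inside AdamW.
# optimizer = torch.optim.AdamW(blockwise_param_groups(model, base_lr=6e-4),
#                               betas=(0.9, 0.95), weight_decay=0.1)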

@article{wang2025_2502.19002,
  title={The Sharpness Disparity Principle in Transformers for Accelerating Language Model Pre-Training},
  author={Jinbo Wang and Mingze Wang and Zhanpeng Zhou and Junchi Yan and Weinan E and Lei Wu},
  journal={arXiv preprint arXiv:2502.19002},
  year={2025}
}