Curriculum Learning: A Regularization Method for Efficient and Stable
Billion-Scale GPT Model Pre-Training
Recent works have demonstrated great success in training large autoregressive language models (e.g., GPT-3) on unlabeled text corpora for text generation. To reduce their expensive training cost, practitioners attempt to increase the batch sizes and learning rates. However, increasing them often causes training instabilities and poor generalization. On the other hand, using smaller batch sizes or learning rates would reduce training efficiency, significantly increasing training time and cost. We investigate this stability-efficiency dilemma and identify that long sequence length is one of the main causes of training instability in large-scale GPT model pre-training. Based on our analysis, we present a novel sequence length warmup method that simultaneously improves training stability and efficiency. As a kind of curriculum learning approach, our method improves the training convergence speed of autoregressive models. More importantly, our in-depth analysis shows that our method exerts a gradient variance reduction effect and regularizes early stages of training, where the amount of training data is much smaller than the model capacity. This enables stable training with much larger batch sizes and learning rates, further improving the training speed. Evaluations show that our approach enables stable GPT-2 (117M and 1.5B) pre-training with 8x larger batch size and 4x larger learning rate, whereas the baseline approach struggles with training instability. To achieve the same or better zero-shot WikiText-103/LAMBADA evaluation results, our approach reduces the required number of pre-training tokens and wall clock time by up to 55% and 73%, respectively.
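The core idea of sequence length warmup can be sketched as a simple schedule: start pre-training with short sequences and gradually grow to the full context length. The snippet below is an illustrative sketch, not the paper's exact schedule; the function name, the linear ramp, the default lengths, and the rounding multiple are all assumptions chosen for clarity.

```python
def seqlen_warmup(step, warmup_steps, min_len=64, max_len=1024, multiple=8):
    """Illustrative sequence length warmup schedule (assumed linear ramp).

    Ramps the training sequence length from min_len to max_len over
    warmup_steps optimizer steps, rounding down to a hardware-friendly
    multiple; after warmup, trains at the full context length.
    """
    if step >= warmup_steps:
        return max_len
    frac = step / warmup_steps
    length = min_len + frac * (max_len - min_len)
    # round down to a multiple for efficient tensor shapes
    return max(min_len, int(length) // multiple * multiple)


# Example: truncate each batch to the scheduled length during early training
# (tokens beyond seqlen_warmup(step, ...) would simply be dropped or re-packed)
for step in (0, 500, 1000):
    print(step, seqlen_warmup(step, warmup_steps=1000))
```

In practice each training batch would be truncated (or re-packed) to the scheduled length, so early steps see short, lower-variance sequences while later steps see the full context.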