FRAMES: Boosting LLMs with A Four-Quadrant Multi-Stage Pretraining Strategy

Large language models (LLMs) have significantly advanced human language understanding and generation, with pretraining data quality and organization being crucial to their performance. Multi-stage pretraining is a promising approach, but existing methods often lack quantitative criteria for data partitioning and instead rely on intuitive heuristics. In this paper, we propose the novel Four-quadRAnt Multi-stage prEtraining strategy (FRAME), guided by the principle of organizing the pretraining process into four stages so that the loss drops significantly four times. This principle is grounded in two key findings: first, training on high-perplexity (PPL) data followed by low-PPL data, and second, training on low-PPL-difference (PD) data followed by high-PD data, each causes the loss to drop significantly twice and improves performance. By partitioning data into four quadrants along these two dimensions and strategically ordering them, FRAME achieves a 16.8% average improvement over random data ordering on MMLU and CMMLU with a 3B model, effectively boosting LLM performance.
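The abstract does not spell out how documents are assigned to quadrants or in which order the quadrants are visited, so the sketch below is only an illustration of the general idea: split a corpus into four quadrants by per-document PPL and PD scores, then emit one plausible four-stage schedule consistent with the two findings above (low-PD data before high-PD data, high-PPL before low-PPL within each half). The median-based split, the variable names, and the exact stage ordering are assumptions, not the paper's specification.

import numpy as np

def partition_four_quadrants(ppl, pd):
    """Split document indices into four quadrants by median PPL and median PD.

    ppl: per-document perplexity under a reference model (assumed scoring setup).
    pd:  per-document perplexity difference between two reference models (assumed).
    """
    ppl, pd = np.asarray(ppl), np.asarray(pd)
    ppl_med, pd_med = np.median(ppl), np.median(pd)
    return {
        ("high_ppl", "low_pd"):  np.where((ppl >= ppl_med) & (pd <  pd_med))[0],
        ("low_ppl",  "low_pd"):  np.where((ppl <  ppl_med) & (pd <  pd_med))[0],
        ("high_ppl", "high_pd"): np.where((ppl >= ppl_med) & (pd >= pd_med))[0],
        ("low_ppl",  "high_pd"): np.where((ppl <  ppl_med) & (pd >= pd_med))[0],
    }

def four_stage_order(quadrants):
    """One plausible schedule: low-PD quadrants first, high-PD quadrants second,
    and high-PPL before low-PPL within each pair (assumption, not the paper's)."""
    order = [
        ("high_ppl", "low_pd"),
        ("low_ppl",  "low_pd"),
        ("high_ppl", "high_pd"),
        ("low_ppl",  "high_pd"),
    ]
    return [quadrants[key] for key in order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ppl_scores = rng.lognormal(mean=2.0, sigma=0.5, size=1000)  # toy PPL values
    pd_scores = rng.normal(loc=0.0, scale=1.0, size=1000)       # toy PD values
    stages = four_stage_order(partition_four_quadrants(ppl_scores, pd_scores))
    for i, idx in enumerate(stages, 1):
        print(f"stage {i}: {len(idx)} documents")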
@article{zhang2025_2502.05551,
  title   = {FRAMES: Boosting LLMs with A Four-Quadrant Multi-Stage Pretraining Strategy},
  author  = {Xuemiao Zhang and Feiyu Duan and Liangyu Xu and Yongwei Zhou and Sirui Wang and Rongxiang Weng and Jingang Wang and Xunliang Cai},
  journal = {arXiv preprint arXiv:2502.05551},
  year    = {2025}
}