
OVERLORD: Ultimate Scaling of DataLoader for Multi-Source Large Foundation Model Training

Abstract

Modern frameworks for training large foundation models (LFMs) employ data loaders in a data-parallel paradigm. While this design offers implementation simplicity, it introduces two fundamental challenges. First, due to the quadratic computational complexity of the attention operator, the non-uniform sample distribution over data-parallel ranks leads to a significant workload imbalance among loaders, which degrades training efficiency. This paradigm also impedes the implementation of data mixing algorithms (e.g., curriculum learning) over different datasets. Second, to acquire a broad range of capabilities, LFM training ingests data from diverse sources, each with distinct file access states. Colocating massive datasets within loader instances can easily exceed local pod memory capacity. Additionally, heavy sources with higher transformation latency require larger worker pools, further exacerbating memory consumption. We present OVERLORD, an industrial-grade distributed data loading architecture with three innovations: (1) a centralized and declarative data plane, which facilitates elastic data orchestration strategies such as long-short context, multimodal, and curriculum learning; (2) disaggregated multi-source preprocessing through role-specific actors, i.e., Source Loaders and Data Constructors, with autoscaling of Source Loaders to handle heterogeneous and evolving source preprocessing costs; (3) Shadow Loaders with differential checkpointing for uninterrupted fault recovery. Deployed on production clusters scaling to multi-thousand GPUs, OVERLORD achieves (1) a 4.5x improvement in end-to-end training throughput and (2) at least a 3.6x reduction in CPU memory usage, with further improvements to be added in later experiments.
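
To make the role split concrete, below is a minimal Python sketch of the disaggregated design described above: per-source Source Loaders run source-specific preprocessing, while a Data Constructor assembles batches according to a centralized, declarative data plan. This is an illustrative assumption, not the authors' implementation; all class names, queue wiring, source names, and quotas are hypothetical, and autoscaling and checkpointing are elided.

# Hypothetical sketch (not the paper's code): Source Loaders preprocess
# per-source data; a Data Constructor assembles batches from a declarative plan.
import queue
import threading
from dataclasses import dataclass


@dataclass
class DataPlan:
    """Declarative mixing plan: samples to draw from each source per batch."""
    source_quotas: dict  # e.g., {"web_text": 2, "code": 1} -- illustrative names


class SourceLoader(threading.Thread):
    """Reads and transforms samples for one source. In the disaggregated design,
    the worker pool per source can be sized to that source's preprocessing cost."""

    def __init__(self, source_name, raw_samples, out_queue):
        super().__init__(daemon=True)
        self.source_name = source_name
        self.raw_samples = raw_samples
        self.out_queue = out_queue

    def run(self):
        for sample in self.raw_samples:
            # Placeholder transform; a real loader would decode/tokenize here.
            self.out_queue.put((self.source_name, sample.upper()))


class DataConstructor:
    """Assembles batches according to the centralized DataPlan, so data mixing
    and placement decisions are made globally rather than per data-parallel rank."""

    def __init__(self, plan, queues):
        self.plan = plan
        self.queues = queues  # one output queue per source

    def next_batch(self):
        batch = []
        for source, quota in self.plan.source_quotas.items():
            for _ in range(quota):
                batch.append(self.queues[source].get())
        return batch


if __name__ == "__main__":
    plan = DataPlan(source_quotas={"web_text": 2, "code": 1})
    queues = {name: queue.Queue() for name in plan.source_quotas}
    loaders = [
        SourceLoader("web_text", ["the cat", "a dog", "more text"], queues["web_text"]),
        SourceLoader("code", ["print(1)", "x = 2"], queues["code"]),
    ]
    for loader in loaders:
        loader.start()
    constructor = DataConstructor(plan, queues)
    print(constructor.next_batch())

In this sketch, changing the mixing strategy (e.g., for curriculum learning) only requires emitting a new DataPlan, which reflects the benefit of keeping orchestration decisions in a centralized, declarative plane rather than inside each rank's loader.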

@article{zhao2025_2504.09844,
  title={OVERLORD: Ultimate Scaling of DataLoader for Multi-Source Large Foundation Model Training},
  author={Juntao Zhao and Qi Lu and Wei Jia and Borui Wan and Lei Zuo and Junda Feng and Jianyu Jiang and Yangrui Chen and Shuaishuai Cao and Jialing He and Kaihua Jiang and Yuanzhe Hu and Yanghua Peng and Haibin Lin and Xin Liu and Chuan Wu},
  journal={arXiv preprint arXiv:2504.09844},
  year={2025}
}