
EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models

Abstract

Distributed training methods are crucial for large language models (LLMs). However, existing distributed training methods often suffer from communication bottlenecks, stragglers, and limited elasticity, particularly in heterogeneous or large-scale environments. Local SGD methods have been proposed to address these issues, but their applicability has remained limited to small-scale training due to additional memory overhead and insufficient attention to efficiency and stability. To tackle these issues, we propose EDiT, an Efficient Distributed Training method that combines a tailored Local SGD approach with model sharding techniques to improve large-scale training efficiency. EDiT performs layer-wise parameter synchronization during the forward pass, which reduces communication and memory overhead and allows synchronization to overlap with computation. In addition, EDiT employs a pseudo gradient penalty strategy to suppress loss spikes, ensuring training stability and improving performance. We also introduce A-EDiT, a fully asynchronous variant of EDiT that accommodates heterogeneous clusters. Building on EDiT/A-EDiT, we conduct a series of experiments to validate large-scale asynchronous training for LLMs, accompanied by comprehensive analyses. Experimental results demonstrate the superior performance of EDiT/A-EDiT, establishing them as robust solutions for distributed LLM training in diverse computational ecosystems. The code is available in the Atorch codebase: this https URL.
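For readers unfamiliar with the Local SGD family of methods the paper builds on, the sketch below illustrates the basic synchronization step: each worker trains independently for several iterations, then all workers average their pseudo gradients (the drift of the local weights from the last global snapshot) and apply an outer update. This is a minimal PyTorch sketch, not the authors' EDiT implementation; the function name, the hyperparameters (`outer_lr`, `max_pg_norm`, `sync_every`), and the global-norm clipping used as a stand-in for the paper's pseudo gradient penalty are all illustrative assumptions.

```python
# Minimal Local-SGD synchronization sketch (NOT the authors' EDiT code).
# Assumptions: `global_params` holds a snapshot of the parameters from the
# previous synchronization, and clipping the averaged pseudo gradient by its
# global norm is only a surrogate for the paper's pseudo gradient penalty.
import torch
import torch.distributed as dist


@torch.no_grad()
def local_sgd_sync(model, global_params, outer_lr=1.0, max_pg_norm=1.0):
    """Average pseudo gradients across workers and update the global weights."""
    world_size = dist.get_world_size()

    # Pseudo gradient per tensor: last global snapshot minus current local weights.
    pseudo_grads = [g - p for g, p in zip(global_params, model.parameters())]

    # Average each pseudo gradient across all workers.
    for pg in pseudo_grads:
        dist.all_reduce(pg, op=dist.ReduceOp.SUM)
        pg.div_(world_size)

    # Clip by global norm to damp the loss spikes that large, stale pseudo
    # gradients can cause (illustrative stand-in for the penalty strategy).
    total_norm = torch.sqrt(sum(pg.pow(2).sum() for pg in pseudo_grads))
    scale = min(1.0, max_pg_norm / (total_norm.item() + 1e-6))

    for g, p, pg in zip(global_params, model.parameters(), pseudo_grads):
        g.sub_(outer_lr * scale * pg)  # outer SGD step on the global weights
        p.copy_(g)                     # reset local weights to the new global state


# Usage inside a training loop (hypothetical): initialize
# global_params = [p.detach().clone() for p in model.parameters()],
# run ordinary local optimizer steps, and call local_sgd_sync(...) only
# every `sync_every` iterations.
```

Because the collective communication runs only every few local steps rather than at every iteration, this family of methods reduces communication frequency compared with synchronous data parallelism; per the abstract, EDiT further shards the synchronized state and performs the exchange layer by layer during the forward pass so that it overlaps with computation.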

@article{cheng2025_2412.07210,
  title={EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models},
  author={Jialiang Cheng and Ning Gao and Yun Yue and Zhiling Ye and Jiadi Jiang and Jian Sha},
  journal={arXiv preprint arXiv:2412.07210},
  year={2025}
}