Training Report of TeleChat3-MoE

Xinzhang Liu
Chao Wang
Zhihao Yang
Zhuo Jiang
Xuncheng Zhao
Haoran Wang
Lei Li
Dongdong He
Luobin Liu
Kaizhe Yuan
Han Gao
Zihan Wang
Yitong Yao
Sishi Xiong
Wenmin Deng
Haowei He
Kaidong Yu
Yu Zhao
Ruiyu Fang
Yuhao Jiang
Yingyan Li
Xiaohui Hu
Xi Yu
Jingqi Li
Yanwei Liu
Qingli Li
Xinyu Shi
Junhao Niu
Chengnuo Huang
Yao Xiao
Ruiwen Wang
Fengkai Li
Luwen Pu
Kaipeng Jia
Fubei Yao
Yuyao Huang
Xuewei He
Zhuoru Jiang
Ruiting Song
Rui Xue
Qiyi Xie
Jie Zhang
Zilu Huang
Zhaoxi Zhang
Zhilong Lu
Yanhan Zhang
Yin Zhang
Yanlei Xue
Zhu Yuan
Teng Su
Xin Jiang
Shuangyong Song
Yongxiang Li
Xuelong Li
Main: 15 pages, 10 figures, 5 tables; Bibliography: 3 pages
Abstract

TeleChat3-MoE is the latest series of TeleChat large language models, featuring a Mixture-of-Experts (MoE) architecture with parameter counts ranging from 105 billion to over one trillion, trained end-to-end on Ascend NPU clusters. This technical report focuses on the underlying training infrastructure that enables reliable and efficient scaling to frontier model sizes. We detail systematic methodologies for operator-level and end-to-end numerical accuracy verification, ensuring consistency across hardware platforms and distributed parallelism strategies. Furthermore, we introduce a suite of performance optimizations, including interleaved pipeline scheduling, attention-aware data scheduling for long-sequence training, hierarchical and overlapped communication for expert parallelism, and DVM-based operator fusion. A systematic parallelization framework, leveraging analytical estimation and integer linear programming, is also proposed to optimize multi-dimensional parallelism configurations. Additionally, we present methodological approaches to cluster-level optimizations, addressing host- and device-bound bottlenecks during large-scale training tasks. These infrastructure advancements yield significant throughput improvements and near-linear scaling on clusters comprising thousands of devices, providing a robust foundation for large-scale language model development on the Ascend hardware ecosystem.
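To make the parallelization-framework claim concrete, the sketch below illustrates the general idea of choosing a multi-dimensional parallelism configuration from an analytical cost model. It is not the paper's implementation: the ILP solve described in the abstract is replaced here by brute-force enumeration, and the memory/time models, constants, and function names (`estimate_memory_gb`, `estimate_step_time`, `best_config`) are hypothetical placeholders.

```python
# Illustrative sketch only: selects (tp, pp, ep, dp) for a given world size
# using a toy analytical cost model. The real framework formulates this as an
# integer linear program; enumeration is used here purely for clarity.
from dataclasses import dataclass
from itertools import product


@dataclass
class Config:
    tp: int  # tensor-parallel size
    pp: int  # pipeline-parallel size
    ep: int  # expert-parallel size
    dp: int  # data-parallel size


def divisors(n: int) -> list[int]:
    return [d for d in range(1, n + 1) if n % d == 0]


def estimate_memory_gb(cfg: Config, params_b: float = 105.0) -> float:
    # Hypothetical memory model: weights + optimizer states sharded across
    # tp * pp * dp ranks; activation footprint shrinks with tensor parallelism.
    weight_states = 16.0 * params_b / (cfg.tp * cfg.pp * cfg.dp)
    activations = 40.0 / cfg.tp
    return weight_states + activations


def estimate_step_time(cfg: Config, params_b: float = 105.0) -> float:
    # Hypothetical time model: per-rank compute plus penalties that grow with
    # tensor/expert communication degree and the pipeline bubble.
    compute = params_b / (cfg.tp * cfg.pp * cfg.dp)
    tp_comm = 0.3 * (cfg.tp - 1)
    ep_comm = 0.2 * (cfg.ep - 1)
    pp_bubble = 0.1 * (cfg.pp - 1)
    return compute + tp_comm + ep_comm + pp_bubble


def best_config(world_size: int, mem_limit_gb: float = 60.0) -> Config | None:
    best, best_t = None, float("inf")
    for tp, pp, ep in product(divisors(world_size), repeat=3):
        if world_size % (tp * pp) != 0:
            continue
        dp = world_size // (tp * pp)
        if dp % ep != 0:  # expert-parallel groups nest inside data parallel
            continue
        cfg = Config(tp, pp, ep, dp)
        if estimate_memory_gb(cfg) > mem_limit_gb:
            continue
        t = estimate_step_time(cfg)
        if t < best_t:
            best, best_t = cfg, t
    return best


if __name__ == "__main__":
    print(best_config(world_size=512))
```

In practice, framing the same search as an ILP lets standard solvers handle much larger configuration spaces and additional constraints (e.g., per-device memory limits and communication-group topology) than exhaustive enumeration would.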
