HetCCL: Accelerating LLM Training with Heterogeneous GPUs

Heehoon Kim
Jaehwan Lee
Taejeoung Kim
Jongwon Park
Jinpyo Kim
Pyongwon Suh
Ryan H. Choi
Sangwoo Lee
Jaejin Lee
Main: 8 pages · Bibliography: 4 pages · Appendix: 10 pages · 17 figures · 6 tables
Abstract

The rapid growth of large language models is driving organizations to expand their GPU clusters, often with GPUs from multiple vendors. However, current deep learning frameworks lack support for collective communication across heterogeneous GPUs, leading to inefficiency and higher costs. We present HetCCL, a collective communication library that unifies vendor-specific backends and enables RDMA-based communication across GPUs without requiring driver modifications. HetCCL introduces two novel mechanisms that enable cross-vendor communication while leveraging the optimized vendor libraries NVIDIA NCCL and AMD RCCL. Evaluations on a multi-vendor GPU cluster show that HetCCL matches NCCL and RCCL performance in homogeneous setups and, unlike them, scales in heterogeneous environments, enabling practical, high-performance training with both NVIDIA and AMD GPUs without changes to existing deep learning applications.
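The abstract does not spell out HetCCL's interface, but the backend-unification idea it describes can be illustrated with a short C++ sketch. One practical obstacle in this setting is that NCCL and RCCL export identically named C symbols (ncclAllReduce and friends), so a unifying layer cannot simply link both; one known workaround, assumed here, is to dlopen() each library with RTLD_LOCAL and route calls through a common interface. Every name below (CollectiveBackend, makeBackend, the stub backends) is hypothetical and not part of HetCCL.

    // Minimal sketch only -- not HetCCL's actual interface. It shows one way
    // to put a single collective API in front of vendor backends whose C
    // symbols clash: a real implementation would dlopen() libnccl.so and
    // librccl.so with RTLD_LOCAL and resolve their entry points via dlsym()
    // instead of linking both directly.
    #include <cstddef>
    #include <cstdio>
    #include <memory>

    enum class Vendor { Nvidia, Amd };

    // Backend interface; a real library would also cover all-gather,
    // reduce-scatter, broadcast, send/recv, and so on.
    struct CollectiveBackend {
        virtual ~CollectiveBackend() = default;
        virtual void allReduceSumF32(const float* send, float* recv,
                                     std::size_t count) = 0;
    };

    // Stub backends standing in for dlopen'd NCCL / RCCL wrappers.
    struct NcclBackend : CollectiveBackend {
        void allReduceSumF32(const float*, float*, std::size_t count) override {
            std::printf("would call NCCL's ncclAllReduce on %zu floats\n", count);
        }
    };
    struct RcclBackend : CollectiveBackend {
        void allReduceSumF32(const float*, float*, std::size_t count) override {
            std::printf("would call RCCL's ncclAllReduce on %zu floats\n", count);
        }
    };

    // Single dispatch point: callers never see which vendor library runs.
    std::unique_ptr<CollectiveBackend> makeBackend(Vendor v) {
        if (v == Vendor::Nvidia) return std::make_unique<NcclBackend>();
        return std::make_unique<RcclBackend>();
    }

    int main() {
        float buf[4] = {1, 2, 3, 4};
        auto backend = makeBackend(Vendor::Nvidia);
        backend->allReduceSumF32(buf, buf, 4);  // in-place all-reduce
    }

Compiled as-is, the stubs only print what they would do; a real dispatcher would resolve the vendor entry points with dlsym() and pass device buffers and streams straight through to the underlying library.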
