Enabling Elastic Model Serving with MultiWorld

Machine learning models have grown exponentially in parameter size over the past few years, and we are now seeing the rise of trillion-parameter models. Such large models cannot fit into a single GPU and thus require partitioned deployment across GPUs and even across hosts. A high-performance collective communication library (CCL) such as NCCL is essential to fully utilize expensive GPU resources. However, CCL is not a great fit for inference. Unlike training, where a fixed amount of GPU resources is used for fixed workloads (e.g., input datasets), inference workloads can change dynamically over time, and failures at serving time directly impact individual users' experiences. In contrast, workers in a CCL process group share a single fault domain, and the process group cannot grow as workloads increase. This gap between the characteristics of model serving and the nature of CCL makes it hard to serve large models elastically. To bridge the gap, we propose MultiWorld, which enables fault tolerance and online scaling at the granularity of individual workers for model serving. Our evaluation shows that enabling these new functionalities incurs small overheads (1.4-4.3% throughput loss) in most of the scenarios we tested.
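To make the process-group limitation concrete, here is a minimal sketch. The first function uses the standard `torch.distributed` API to show why a single CCL process group forms one fault domain with fixed membership; the `WorldManager` class that follows is a hypothetical illustration of the per-worker isolation idea described above, not MultiWorld's actual API.

```python
import torch
import torch.distributed as dist


def ccl_worker(rank: int, world_size: int) -> None:
    """Classic CCL setup: one process group, one fault domain.

    Membership is fixed at init time (assumes env:// rendezvous, i.e.
    MASTER_ADDR/MASTER_PORT are set). The group can neither grow nor
    shrink while serving, and a crash of any single rank stalls the
    collective for every other rank.
    """
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    x = torch.ones(1, device=f"cuda:{rank}")
    dist.all_reduce(x)  # blocks until all `world_size` ranks participate
    dist.destroy_process_group()


class WorldManager:
    """Hypothetical sketch of per-worker isolation (illustrative names,
    not MultiWorld's real API): each small "world" gets its own
    communicator, so a failed worker only takes down its own world, and
    new worlds can be added online as the serving load grows."""

    def __init__(self) -> None:
        self.worlds: dict[str, list[int]] = {}  # world name -> member ranks

    def add_world(self, name: str, ranks: list[int]) -> None:
        # A real system would create a fresh communicator (e.g., a new
        # NCCL process group) covering only these ranks.
        self.worlds[name] = ranks

    def remove_world(self, name: str) -> None:
        # Tearing down one world leaves every other world untouched,
        # which is what makes worker-granularity fault tolerance possible.
        self.worlds.pop(name, None)
```

In the first setup, scaling out means tearing down and re-creating the entire group; in the multi-world sketch, worlds come and go independently, which matches the worker-granularity fault tolerance and online scaling the paper proposes.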