Addressing Model and Data Heterogeneity in Multimodal Large Language Model Training

Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM), 2024
Main: 11 pages · Bibliography: 3 pages · Appendix: 1 page · 24 figures · 3 tables
Abstract

Multimodal large language models (LLMs) have demonstrated significant potential in a wide range of AI applications. Yet, training multimodal LLMs suffers from low efficiency and poor scalability due to the inherent model heterogeneity and data heterogeneity across different modalities. We present MMScale, an efficient and adaptive framework that reforms the training of multimodal LLMs on large-scale clusters. MMScale exploits the system characteristics of multimodal LLM training to achieve high efficiency and scalability. At its core are adaptive resource allocation and data-aware reordering, which eliminate model heterogeneity and data heterogeneity, respectively. We also tailor system optimizations for multimodal LLM training to offload certain operations from GPU training. We evaluate MMScale across different sizes of multimodal LLMs on a large-scale production cluster with thousands of GPUs. The experimental results show that MMScale achieves 54.7% Model FLOPs Utilization (MFU) when training a 72B multimodal LLM on 1172 GPUs and outperforms Megatron-LM by up to 2.2× on throughput. An ablation study shows that the main techniques of MMScale are both effective and lightweight.
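To give a flavor of what "data-aware reordering" can mean, here is a minimal, hypothetical sketch: samples with heterogeneous token counts are greedily assigned to the currently lightest microbatch (a longest-processing-time heuristic), so per-microbatch load is roughly balanced. The function name, heuristic, and interface are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of data-aware reordering: balance microbatch load
# by assigning the longest remaining sample to the lightest microbatch.
# This LPT-style greedy heuristic is an illustrative assumption, not the
# algorithm described in the paper.
import heapq

def reorder_balanced(sample_lengths, num_microbatches):
    """Partition sample indices into microbatches with roughly equal total length."""
    # Min-heap of (current_load, microbatch_index)
    heap = [(0, i) for i in range(num_microbatches)]
    heapq.heapify(heap)
    microbatches = [[] for _ in range(num_microbatches)]
    # Visit samples longest-first so large samples are spread out early
    for idx in sorted(range(len(sample_lengths)),
                      key=lambda i: -sample_lengths[i]):
        load, mb = heapq.heappop(heap)
        microbatches[mb].append(idx)
        heapq.heappush(heap, (load + sample_lengths[idx], mb))
    return microbatches
```

In a real system the "length" would be a per-sample cost model covering all modalities (e.g. image patches plus text tokens), but the balancing idea is the same.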
