Orchestrate Multimodal Data with Batch Post-Balancing to Accelerate Multimodal Large Language Model Training

Abstract

Multimodal large language models (MLLMs), such as GPT-4o, are garnering significant attention. During the exploration of MLLM training, we identified Modality Composition Incoherence, a phenomenon in which the proportion of a certain modality varies dramatically across different examples. It exacerbates the challenges of addressing mini-batch imbalances, which lead to uneven GPU utilization between Data Parallel (DP) instances and severely degrade the efficiency and scalability of MLLM training, ultimately affecting training speed and hindering further research on MLLMs. To address these challenges, we introduce OrchMLLM, a comprehensive framework designed to mitigate the inefficiencies in MLLM training caused by Modality Composition Incoherence. First, we propose Batch Post-Balancing Dispatcher, a technique that efficiently eliminates mini-batch imbalances in sequential data. Additionally, we integrate MLLM Global Orchestrator into the training framework to orchestrate multimodal data and tackle the issues arising from Modality Composition Incoherence. We evaluate OrchMLLM across various MLLM sizes, demonstrating its efficiency and scalability. Experimental results reveal that OrchMLLM achieves a Model FLOPs Utilization (MFU) of 41.6% when training an 84B MLLM with three modalities on 2560 H100 GPUs, outperforming Megatron-LM by up to 3.1x in throughput.
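To make the batch post-balancing idea concrete, below is a minimal sketch (not the authors' implementation) of redistributing a sampled global batch of variable-length multimodal examples across data-parallel ranks so that per-rank workload is roughly even. The function name `rebalance_global_batch`, the per-sample token-count representation, and the greedy longest-processing-time heuristic are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of post-balancing a global batch across DP ranks:
# after sampling, reassign samples so each rank gets a similar total
# token load, instead of keeping the original (possibly skewed) split.

import heapq
from typing import List, Dict

def rebalance_global_batch(samples: List[Dict[str, int]],
                           dp_size: int) -> List[List[int]]:
    """Greedily assign samples to DP ranks to balance total tokens.

    Each sample is a dict of modality -> token count, e.g.
    {"text": 512, "image": 1024}. Returns, for each rank, the indices
    of the samples assigned to it.
    """
    # Approximate per-sample cost by total tokens across modalities.
    costs = [(sum(s.values()), idx) for idx, s in enumerate(samples)]
    costs.sort(reverse=True)  # heaviest samples first (LPT heuristic)

    # Min-heap of (current_load, rank): always fill the lightest rank.
    heap = [(0, rank) for rank in range(dp_size)]
    heapq.heapify(heap)
    assignment = [[] for _ in range(dp_size)]

    for cost, idx in costs:
        load, rank = heapq.heappop(heap)
        assignment[rank].append(idx)
        heapq.heappush(heap, (load + cost, rank))
    return assignment

if __name__ == "__main__":
    batch = [{"text": 512, "image": 1024}, {"text": 2048, "image": 0},
             {"text": 128, "image": 4096}, {"text": 1024, "image": 256}]
    print(rebalance_global_batch(batch, dp_size=2))
```

In a real MLLM setting, the cost model would need to account for each modality's encoder separately (the Modality Composition Incoherence the paper describes), not just total token count as this toy example does.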

@article{zheng2025_2503.23830,
  title={Orchestrate Multimodal Data with Batch Post-Balancing to Accelerate Multimodal Large Language Model Training},
  author={Yijie Zheng and Bangjun Xiao and Lei Shi and Xiaoyang Li and Faming Wu and Tianyu Li and Xuefeng Xiao and Yang Zhang and Yuxuan Wang and Shouda Liu},
  journal={arXiv preprint arXiv:2503.23830},
  year={2025}
}