EE-MLLM: A Data-Efficient and Compute-Efficient Multimodal Large Language Model

Feipeng Ma
Yizhou Zhou
Zheyu Zhang
Shilin Yan
Hebei Li
Zilong He
Siying Wu
Fengyun Rao
Yueyi Zhang
Xiaoyan Sun
Abstract

Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated satisfactory performance across various vision-language tasks. Current approaches for vision and language interaction fall into two categories: self-attention-based and cross-attention-based methods. However, both approaches present inherent limitations, forcing a trade-off between data efficiency and computational efficiency. To address this issue, we introduce the Data-Efficient and Compute-Efficient MLLM (EE-MLLM). Specifically, we modify the original self-attention mechanism in the MLLM to a composite attention mechanism. This mechanism has two key characteristics: 1) it eliminates the computational overhead of self-attention among visual tokens to achieve compute efficiency, and 2) it reuses the weights from each layer of the LLM to facilitate effective vision-language modality alignment for data efficiency. As a result, EE-MLLM significantly outperforms Flamingo with limited training data and reduces the prefilling time to 79 ms on an H800 GPU, compared to LLaVA's 277 ms. To further investigate the efficiency of EE-MLLM, we present a training-free variant named EE-MLLM-F, which reduces the computation cost of self-attention-based methods without additional training. Experimental results demonstrate the effectiveness of EE-MLLM across a range of benchmarks, including general-purpose datasets such as MMBench and SeedBench, as well as fine-grained tasks such as TextVQA and DocVQA.
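The composite attention idea can be pictured with a short sketch. The code below is illustrative only, not the authors' implementation: it assumes the mechanism roughly amounts to text-token queries attending over the concatenation of visual and text keys/values produced by the LLM layer's own projection weights, while self-attention among visual tokens is skipped entirely. The names composite_attention, q_proj, k_proj, and v_proj are hypothetical placeholders.

import torch
import torch.nn.functional as F

def composite_attention(q_proj, k_proj, v_proj, visual, text):
    # visual: (B, Nv, D) visual token embeddings
    # text:   (B, Nt, D) text token embeddings
    # q_proj / k_proj / v_proj: the LLM layer's own nn.Linear projections,
    # reused for both modalities (no extra cross-attention parameters).

    # Keys and values span both modalities, computed with the shared LLM weights.
    kv_input = torch.cat([visual, text], dim=1)        # (B, Nv + Nt, D)
    k = k_proj(kv_input)
    v = v_proj(kv_input)

    # Queries come only from text tokens, so the quadratic Nv x Nv
    # self-attention among visual tokens is never computed.
    q = q_proj(text)                                   # (B, Nt, D)

    # Standard scaled dot-product attention; causal masking over the
    # text portion is omitted here for brevity.
    return F.scaled_dot_product_attention(q, k, v)     # (B, Nt, D)

Under this sketch, the attention cost scales with Nt x (Nv + Nt) rather than (Nv + Nt)^2, which is where a prefilling speedup over a standard self-attention MLLM would come from.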

@article{ma2025_2408.11795,
  title={EE-MLLM: A Data-Efficient and Compute-Efficient Multimodal Large Language Model},
  author={Feipeng Ma and Yizhou Zhou and Zheyu Zhang and Shilin Yan and Hebei Li and Zilong He and Siying Wu and Fengyun Rao and Yueyi Zhang and Xiaoyan Sun},
  journal={arXiv preprint arXiv:2408.11795},
  year={2025}
}