
EE-MLLM: A Data-Efficient and Compute-Efficient Multimodal Large Language Model

Yizhou Zhou
Siying Wu
Fengyun Rao
Yueyi Zhang
Xiaoyan Sun
Main: 8 pages
Figures: 6
Tables: 7
Bibliography: 2 pages
Appendix: 3 pages
Abstract

Recent advancements in Multimodal Large Language Models (MLLMs) have demonstrated satisfactory performance across various vision-language tasks. Current approaches for vision and language interaction fall into two categories: self-attention-based and cross-attention-based methods. However, both approaches present inherent limitations, forcing a trade-off between data and computational efficiency. To address this issue, we introduce the Data-Efficient and Compute-Efficient MLLM (EE-MLLM). Specifically, we modify the original self-attention mechanism in the MLLM to a composite attention mechanism. This mechanism has two key characteristics: 1) it eliminates the computational overhead of self-attention among visual tokens to achieve compute efficiency, and 2) it reuses the weights of each LLM layer to facilitate effective vision-language modality alignment for data efficiency. As a result, EE-MLLM significantly outperforms Flamingo with limited training data, and reduces the prefilling time to 79 ms on an H800 GPU, compared to LLaVA's 277 ms. To further investigate the efficiency of EE-MLLM, we present a training-free variant named EE-MLLM-F, which reduces the computation cost of self-attention-based methods without additional training. Experimental results demonstrate the effectiveness of EE-MLLM across a range of benchmarks, including general-purpose datasets like MMBench and SeedBench, as well as fine-grained tasks such as TextVQA and DocVQA.
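The compute-efficiency claim can be illustrated with a minimal sketch of an attention layer in which visual tokens are never used as queries, so the score matrix costs O(n_text × seq) instead of O(seq²). This is an assumption-laden toy based only on the abstract's description; the function name `composite_attention`, the NumPy projections `w_q`/`w_k`/`w_v` (standing in for the LLM layer's own reused weights), and the omission of causal masking are all simplifications, not the authors' implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def composite_attention(x, n_vis, w_q, w_k, w_v):
    """Toy composite-attention sketch (not the paper's code).

    x: (seq, d) token states, the first n_vis rows being visual tokens.
    Text tokens attend to all tokens (vision + text), but visual tokens
    are skipped as queries, removing self-attention among visual tokens.
    w_q/w_k/w_v stand in for the LLM layer's own projection weights,
    which EE-MLLM reuses rather than training new cross-attention.
    Causal masking is omitted for brevity.
    """
    q = x[n_vis:] @ w_q                      # queries: text tokens only
    k = x @ w_k                              # keys over all tokens
    v = x @ w_v                              # values over all tokens
    scores = q @ k.T / np.sqrt(q.shape[-1])  # (n_text, seq), not (seq, seq)
    out = x.copy()
    out[n_vis:] = softmax(scores) @ v        # visual tokens pass through
    return out
```

The pass-through of visual tokens is what eliminates the quadratic vision-vision term: with hundreds of image tokens and a short text prompt, the score matrix shrinks from (seq × seq) to (n_text × seq).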
