Re-Imagining Multimodal Instruction Tuning: A Representation View

Abstract

Multimodal instruction tuning has proven to be an effective strategy for achieving zero-shot generalization by fine-tuning pre-trained Large Multimodal Models (LMMs) with instruction-following data. However, as the scale of LMMs continues to grow, fully fine-tuning these models has become highly parameter-intensive. Although Parameter-Efficient Fine-Tuning (PEFT) methods have been introduced to reduce the number of tunable parameters, a significant performance gap remains compared to full fine-tuning. Furthermore, existing PEFT approaches are often highly parameterized, making them difficult to interpret and control. In light of this, we introduce Multimodal Representation Tuning (MRT), a novel approach that directly edits semantically rich multimodal representations to achieve strong performance and provide intuitive control over LMMs. Empirical results show that our method surpasses current state-of-the-art baselines with significant performance gains (e.g., an MME score of 1580.40) while requiring substantially fewer tunable parameters (e.g., 0.03% of the model parameters). Additionally, we conduct experiments on editing instrumental tokens within multimodal representations, demonstrating that direct manipulation of these representations enables simple yet effective control over network behavior.
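
The following is a minimal, illustrative sketch of representation-level tuning in this general spirit: a small, learned low-rank edit applied to the hidden states of a frozen backbone, optionally restricted to selected ("instrumental") token positions. The module name, rank, and masking scheme here are hypothetical placeholders and do not reproduce the exact MRT formulation from the paper.

import torch
import torch.nn as nn

class RepresentationIntervention(nn.Module):
    """Hypothetical low-rank edit on hidden states of a frozen LMM layer.

    Illustrative sketch of representation tuning in general; not the
    paper's exact MRT method.
    """

    def __init__(self, hidden_dim: int, rank: int = 4):
        super().__init__()
        # A low-rank projection keeps the number of tunable parameters
        # tiny relative to the frozen backbone.
        self.down = nn.Linear(hidden_dim, rank, bias=False)
        self.up = nn.Linear(rank, hidden_dim, bias=False)
        nn.init.zeros_(self.up.weight)  # start as an identity (no-op) edit

    def forward(self, hidden_states: torch.Tensor, token_ids=None) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) from a frozen layer.
        edit = self.up(self.down(hidden_states))
        if token_ids is not None:
            # Restrict the edit to chosen token positions.
            mask = torch.zeros_like(hidden_states[..., :1])
            mask[:, token_ids, :] = 1.0
            edit = edit * mask
        return hidden_states + edit


if __name__ == "__main__":
    # Usage: freeze the backbone and train only the intervention parameters.
    hidden = torch.randn(2, 16, 4096)  # mock multimodal hidden states
    intervention = RepresentationIntervention(hidden_dim=4096, rank=4)
    edited = intervention(hidden, token_ids=[0, 1, 2, 3])  # edit first four tokens
    print(edited.shape)  # torch.Size([2, 16, 4096])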

@article{liu2025_2503.00723,
  title={Re-Imagining Multimodal Instruction Tuning: A Representation View},
  author={Yiyang Liu and James Chenhao Liang and Ruixiang Tang and Yugyung Lee and Majid Rabbani and Sohail Dianat and Raghuveer Rao and Lifu Huang and Dongfang Liu and Qifan Wang and Cheng Han},
  journal={arXiv preprint arXiv:2503.00723},
  year={2025}
}