
MINT: Multimodal Instruction Tuning with Multimodal Interaction Grouping

Main: 9 pages · Appendix: 10 pages · Bibliography: 7 pages · 7 figures · 3 tables
Abstract

Recent advances in multimodal foundation models have achieved state-of-the-art performance across a range of tasks. These breakthroughs are largely driven by new pre-training paradigms that leverage large-scale, unlabeled multimodal data, followed by instruction fine-tuning on curated labeled datasets with high-quality prompts. While there is growing interest in scaling instruction fine-tuning to ever-larger datasets, in both number and size, our findings reveal that simply increasing the number of instruction-tuning tasks does not consistently yield better performance. Instead, we observe that grouping tasks by the common interactions across modalities, such as discovering redundant shared information, prioritizing the modality that carries unique information, or requiring synergistic fusion to uncover new information from both modalities, encourages the models to learn transferable skills within a group while suppressing interference from mismatched tasks. To this end, we introduce MINT, a simple yet surprisingly effective task-grouping strategy based on the type of multimodal interaction. We demonstrate that the proposed method greatly outperforms existing task-grouping baselines for multimodal instruction tuning, striking an effective balance between generalization and specialization.
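
To make the grouping idea concrete, the sketch below buckets instruction-tuning tasks by their dominant multimodal interaction (redundancy, uniqueness, synergy) and fine-tunes one specialist per group. This is a minimal illustration under stated assumptions, not the paper's implementation: the task names, interaction labels, and the finetune placeholder are hypothetical.

# Hedged sketch of interaction-based task grouping; names and labels are illustrative.
from collections import defaultdict

# Hypothetical catalogue of instruction-tuning tasks, each tagged with the
# multimodal interaction it mainly exercises (assumed labels, not from the paper).
TASKS = {
    "image_captioning":     "redundancy",  # vision and text convey largely shared content
    "ocr_reading":          "uniqueness",  # the answer lives almost entirely in one modality
    "audio_visual_qa":      "synergy",     # the answer requires fusing both modalities
    "visual_entailment":    "synergy",
    "speech_transcription": "uniqueness",
    "image_text_matching":  "redundancy",
}

def group_by_interaction(task_to_interaction):
    """Bucket tasks by interaction type (redundancy / uniqueness / synergy)."""
    groups = defaultdict(list)
    for task, interaction in task_to_interaction.items():
        groups[interaction].append(task)
    return dict(groups)

def finetune(base_model, tasks):
    """Placeholder for instruction fine-tuning on a mixture of the given tasks."""
    print(f"Fine-tuning {base_model} on: {', '.join(sorted(tasks))}")
    return f"{base_model}+{len(tasks)}tasks"

if __name__ == "__main__":
    groups = group_by_interaction(TASKS)
    # One specialist per interaction group, instead of a single model on all tasks,
    # so skills transfer within a group while cross-group interference is avoided.
    specialists = {g: finetune("base-mm-model", ts) for g, ts in groups.items()}
    print(specialists)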

@article{shan2025_2506.02308,
  title={MINT: Multimodal Instruction Tuning with Multimodal Interaction Grouping},
  author={Xiaojun Shan and Qi Cao and Xing Han and Haofei Yu and Paul Pu Liang},
  journal={arXiv preprint arXiv:2506.02308},
  year={2025}
}