
M³IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning

Lei Li
Yuwei Yin
Shicheng Li
Liang Chen
Peiyi Wang
Shuhuai Ren
Mukai Li
Yazheng Yang
Jingjing Xu
Xu Sun
Lingpeng Kong
Qi Liu
Abstract

Instruction tuning has significantly advanced large language models (LLMs) such as ChatGPT, enabling them to align with human instructions across diverse tasks. However, progress in open vision-language models (VLMs) has been limited by the scarcity of high-quality instruction datasets. To tackle this challenge and promote research in the vision-language field, we introduce the Multi-Modal, Multilingual Instruction Tuning (M³IT) dataset, designed to optimize VLM alignment with human instructions. Our M³IT dataset comprises 40 carefully curated datasets, including 2.4 million instances and 400 manually written task instructions, reformatted into a vision-to-text structure. Key tasks are translated into 80 languages with an advanced translation system, ensuring broader accessibility. M³IT surpasses previous datasets in task coverage, number of instructions, and instance scale. Moreover, we develop Ying-VLM, a VLM trained on our M³IT dataset, showcasing its potential to answer complex questions requiring world knowledge, generalize to unseen video tasks, and comprehend unseen instructions in Chinese. We have open-sourced the dataset to encourage further research.
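To make the "vision-to-text structure" concrete, the following is a minimal sketch of how a single instruction-tuning instance might be represented. The field names (`instruction`, `image_b64`, `inputs`, `outputs`) and the prompt-flattening step are illustrative assumptions for this sketch, not the dataset's actual schema.

```python
# Illustrative sketch of a vision-to-text instruction-tuning instance.
# The field names below are assumptions, NOT the official M3IT schema.
from dataclasses import dataclass

@dataclass
class VisionTextInstance:
    instruction: str  # a manually written task instruction
    image_b64: str    # image content, e.g., base64-encoded
    inputs: str       # task-specific textual input (may be empty)
    outputs: str      # target text the model should generate

def to_training_prompt(ex: VisionTextInstance) -> str:
    """Flatten an instance into the text side of a vision-to-text pair."""
    parts = [ex.instruction]
    if ex.inputs:
        parts.append(ex.inputs)
    return "\n".join(parts)

# Usage: the decoded image serves as the vision input, while
# to_training_prompt(ex) and ex.outputs form the text prompt/target pair.
ex = VisionTextInstance(
    instruction="Describe the image in one sentence.",
    image_b64="<base64 bytes>",
    inputs="",
    outputs="A dog catching a frisbee in a park.",
)
print(to_training_prompt(ex))
```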
