M6: A Chinese Multimodal Pretrainer

1 March 2021
Junyang Lin, Rui Men, An Yang, Chan Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, Jie M. Zhang, Jianwei Zhang, Xu Zou, Zhikang Li, X. Deng, Jie Liu, J. Xue, Huiling Zhou, Jianxin Ma, Jin Yu, Yong Li, Wei Lin, Jingren Zhou, J. Tang, Hongxia Yang
Topics: VLM, MoE
Abstract

In this work, we construct the largest dataset for multimodal pretraining in Chinese, consisting of over 1.9TB of images and 292GB of texts covering a wide range of domains. We propose a cross-modal pretraining method called M6, short for Multi-Modality to Multi-Modality Multitask Mega-transformer, for unified pretraining on data of both single and multiple modalities. We scale the model up to 10 billion and 100 billion parameters, building the largest pretrained model in Chinese. We apply the model to a series of downstream applications and demonstrate its outstanding performance in comparison with strong baselines. Furthermore, we specifically design a downstream task of text-guided image generation, and show that the finetuned M6 can create high-quality, high-resolution images with abundant details.
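The abstract describes the approach only at a high level. The sketch below is a minimal, hedged illustration (not the authors' released code) of the two ideas it points to: unified pretraining by packing image-patch embeddings and text-token embeddings into a single transformer sequence, and scaling capacity with a mixture-of-experts (MoE) feed-forward block, as suggested by the paper's MoE tag. All dimensions, the vocabulary size, and the top-1 routing scheme are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch only: a shared sequence for image patches and text tokens,
# plus a toy top-1 MoE feed-forward block. Sizes and routing are assumptions.
import torch
import torch.nn as nn


class PatchAndTokenEmbedder(nn.Module):
    """Project image patches and text token ids into a shared embedding space."""

    def __init__(self, patch_dim=768, vocab_size=21128, d_model=512):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, d_model)     # image patches -> d_model
        self.token_emb = nn.Embedding(vocab_size, d_model)  # text tokens -> d_model

    def forward(self, patches, token_ids):
        # patches: (batch, n_patches, patch_dim); token_ids: (batch, n_tokens)
        img = self.patch_proj(patches)
        txt = self.token_emb(token_ids)
        # Concatenate along the sequence axis: one sequence, two modalities.
        return torch.cat([img, txt], dim=1)


class TopOneMoEFFN(nn.Module):
    """Toy top-1 mixture-of-experts feed-forward block.

    Capacity grows with the number of experts while each token is routed to a
    single expert, which is the usual motivation for MoE scaling. For clarity,
    this toy version runs every expert on every token and masks the outputs.
    """

    def __init__(self, d_model=512, d_ff=2048, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(n_experts)]
        )

    def forward(self, x):
        # x: (batch, seq, d_model)
        scores = self.gate(x).softmax(dim=-1)   # routing probabilities per token
        choice = scores.argmax(dim=-1)          # top-1 expert index per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = (choice == i).unsqueeze(-1).to(x.dtype)  # tokens routed to expert i
            out = out + mask * expert(x)
        return out


if __name__ == "__main__":
    embedder = PatchAndTokenEmbedder()
    moe = TopOneMoEFFN()
    patches = torch.randn(2, 16, 768)                 # 16 image patches per sample
    token_ids = torch.randint(0, 21128, (2, 32))      # 32 text tokens per sample
    seq = embedder(patches, token_ids)                # (2, 48, 512) shared sequence
    print(moe(seq).shape)                             # torch.Size([2, 48, 512])
```

In the actual M6 model the shared sequence feeds a large transformer trained on multitask objectives; the block above only illustrates how single-modality and multi-modality inputs can share one backbone and how expert layers add parameters without a proportional per-token compute cost.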

View on arXiv: 2103.00823