WuDaoMM: A large-scale Multi-Modal Dataset for Pre-training models
arXiv:2203.11480 · 22 March 2022 · VLM
Shan Yuan, Shuai Zhao, Jiahong Leng, Zhao Xue, Hanyu Zhao, Peiyu Liu, Zheng Gong, Wayne Xin Zhao, Junyi Li, Jie Tang
Papers citing "WuDaoMM: A large-scale Multi-Modal Dataset for Pre-training models" (5 of 5 papers shown):
Trustworthy Federated Learning: Privacy, Security, and Beyond
Chunlu Chen, Ji Liu, Haowen Tan, Xingjian Li, Kevin I-Kai Wang, Peng Li, Kouichi Sakurai, Dejing Dou
FedML · 03 Nov 2024 · 52 / 3 / 0

OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, ..., Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh
21 Jun 2023 · 25 / 230 / 0

Large-scale Multi-Modal Pre-trained Models: A Comprehensive Survey
Xiao Wang, Guangyao Chen, Guangwu Qian, Pengcheng Gao, Xiaoyong Wei, Yaowei Wang, Yonghong Tian, Wen Gao
AI4CE, VLM · 20 Feb 2023 · 26 / 200 / 0

Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
VLM · 24 Feb 2021 · 253 / 4,774 / 0

Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts
Soravit Changpinyo, P. Sharma, Nan Ding, Radu Soricut
VLM · 17 Feb 2021 · 273 / 1,081 / 0