cViL: Cross-Lingual Training of Vision-Language Models using Knowledge Distillation
Kshitij Gupta, Devansh Gautam, R. Mamidi
arXiv:2206.03354 (7 June 2022) [VLM]
Papers citing "cViL: Cross-Lingual Training of Vision-Language Models using Knowledge Distillation" (4 of 4 papers shown):

Word Alignment by Fine-tuning Embeddings on Parallel Corpora
Zi-Yi Dou, Graham Neubig (20 Jan 2021)

VinVL: Revisiting Visual Representations in Vision-Language Models
Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, Jianfeng Gao [ObjD, VLM] (02 Jan 2021)

LRC-BERT: Latent-representation Contrastive Knowledge Distillation for Natural Language Understanding
Hao Fu, Shaojun Zhou, Qihong Yang, Junjie Tang, Guiquan Liu, Kaikui Liu, Xiaolong Li (14 Dec 2020)

Unified Vision-Language Pre-Training for Image Captioning and VQA
Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, Jianfeng Gao [MLLM, VLM] (24 Sep 2019)