ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
Align before Fuse: Vision and Language Representation Learning with Momentum Distillation

16 July 2021
Junnan Li
Ramprasaath R. Selvaraju
Akhilesh Deepak Gotmare
Shafiq R. Joty
Caiming Xiong
S. Hoi
    FaML

Papers citing "Align before Fuse: Vision and Language Representation Learning with Momentum Distillation"

50 / 1,192 papers shown
MXM-CLR: A Unified Framework for Contrastive Learning of Multifold Cross-Modal Representations
Ye Wang
Bo-Shu Jiang
C. Zou
Rui Ma
22
5
0
20 Mar 2023
IMF: Interactive Multimodal Fusion Model for Link Prediction
Xinhang Li
Xiangyu Zhao
Jiaxing Xu
Yong Zhang
Chunxiao Xing
32
39
0
20 Mar 2023
DeAR: Debiasing Vision-Language Models with Additive Residuals
Ashish Seth
Mayur Hemani
Chirag Agarwal
VLM
10
53
0
18 Mar 2023
Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report Generation
Mingjie Li
Bingqian Lin
Zicong Chen
Haokun Lin
Xiaodan Liang
Xiaojun Chang
MedIm
15
106
0
18 Mar 2023
Investigating the Role of Attribute Context in Vision-Language Models for Object Recognition and Detection
Kyle Buettner
Adriana Kovashka
20
0
0
17 Mar 2023
VEIL: Vetting Extracted Image Labels from In-the-Wild Captions for Weakly-Supervised Object Detection
Arushi Rai
Adriana Kovashka
19
0
0
16 Mar 2023
MultiModal Bias: Introducing a Framework for Stereotypical Bias Assessment beyond Gender and Race in Vision Language Models
Sepehr Janghorbani
Gerard de Melo
VLM
36
10
0
16 Mar 2023
Data Roaming and Quality Assessment for Composed Image Retrieval
Matan Levy
Rami Ben-Ari
N. Darshan
Dani Lischinski
27
23
0
16 Mar 2023
GridCLIP: One-Stage Object Detection by Grid-Level CLIP Representation Learning
Jiaying Lin
S. Gong
VLM
CLIP
ObjD
17
22
0
16 Mar 2023
Facial Affect Recognition based on Transformer Encoder and Audiovisual Fusion for the ABAW5 Challenge
Ziyang Zhang
Liuwei An
Zishun Cui
Ao Xu
Tengteng Dong
Yueqi Jiang
Jingyi Shi
Xin Liu
Xiao Sun
Meng Wang
CVBM
23
20
0
16 Mar 2023
Unified Visual Relationship Detection with Vision and Language Models
Long Zhao
Liangzhe Yuan
Boqing Gong
Yin Cui
Florian Schroff
Ming Yang
Hartwig Adam
Ting Liu
ObjD
25
9
0
16 Mar 2023
Efficient Image-Text Retrieval via Keyword-Guided Pre-Screening
Min Cao
Yang Bai
Jingyao Wang
Ziqiang Cao
Liqiang Nie
Min Zhang
20
0
0
14 Mar 2023
PMC-CLIP: Contrastive Language-Image Pre-training using Biomedical Documents
Weixiong Lin
Ziheng Zhao
Xiaoman Zhang
Chaoyi Wu
Ya-Qin Zhang
Yanfeng Wang
Weidi Xie
LM&MA
VLM
MedIm
12
142
0
13 Mar 2023
Scaling Vision-Language Models with Sparse Mixture of Experts
Sheng Shen
Z. Yao
Chunyuan Li
Trevor Darrell
Kurt Keutzer
Yuxiong He
VLM
MoE
11
62
0
13 Mar 2023
Contextually-rich human affect perception using multimodal scene information
Digbalay Bose
Rajat Hebbar
Krishna Somandepalli
Shrikanth Narayanan
15
3
0
13 Mar 2023
Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning
Qian Jiang
Changyou Chen
Han Zhao
Liqun Chen
Q. Ping
S. D. Tran
Yi Xu
Belinda Zeng
Trishul M. Chilimbi
41
38
0
10 Mar 2023
Tag2Text: Guiding Vision-Language Model via Image Tagging
Xinyu Huang
Youcai Zhang
Jinyu Ma
Weiwei Tian
Rui Feng
Yuejie Zhang
Yaqian Li
Yandong Guo
Lei Zhang
CLIP
MLLM
VLM
3DV
61
74
0
10 Mar 2023
Refined Vision-Language Modeling for Fine-grained Multi-modal Pre-training
Lisai Zhang
Qingcai Chen
Zhijian Chen
Yunpeng Han
Zhonghua Li
Zhao Cao
VLM
25
1
0
09 Mar 2023
A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT
Yihan Cao
Siyu Li
Yixin Liu
Zhiling Yan
Yutong Dai
Philip S. Yu
Lichao Sun
24
501
0
07 Mar 2023
Your representations are in the network: composable and parallel adaptation for large scale models
Yonatan Dukler
Alessandro Achille
Hao-Yu Yang
Varsha Vivek
L. Zancato
Benjamin Bowman
Avinash Ravichandran
Charless C. Fowlkes
A. Swaminathan
Stefano Soatto
16
3
0
07 Mar 2023
Multimodal Prompting with Missing Modalities for Visual Recognition
Yi-Lun Lee
Yi-Hsuan Tsai
Wei-Chen Chiu
Chen-Yu Lee
VPVLM
22
93
0
06 Mar 2023
DeCap: Decoding CLIP Latents for Zero-Shot Captioning via Text-Only Training
Wei Li
Linchao Zhu
Longyin Wen
Yi Yang
VLM
40
86
0
06 Mar 2023
HiCLIP: Contrastive Language-Image Pretraining with Hierarchy-aware Attention
Shijie Geng
Jianbo Yuan
Yu Tian
Yuxiao Chen
Yongfeng Zhang
CLIP
VLM
41
44
0
06 Mar 2023
CLIP-guided Prototype Modulating for Few-shot Action Recognition
Xiang Wang
Shiwei Zhang
Jun Cen
Changxin Gao
Yingya Zhang
Deli Zhao
Nong Sang
VLM
11
53
0
06 Mar 2023
Knowledge-Based Counterfactual Queries for Visual Question Answering
Theodoti Stoikou
Maria Lymperaiou
Giorgos Stamou
AAML
13
1
0
05 Mar 2023
Prismer: A Vision-Language Model with Multi-Task Experts
Shikun Liu
Linxi Fan
Edward Johns
Zhiding Yu
Chaowei Xiao
Anima Anandkumar
VLM
MLLM
34
21
0
04 Mar 2023
FAME-ViL: Multi-Tasking Vision-Language Model for Heterogeneous Fashion Tasks
Xiaoping Han
Xiatian Zhu
Licheng Yu
Li Zhang
Yi-Zhe Song
Tao Xiang
VLM
16
38
0
04 Mar 2023
PixMIM: Rethinking Pixel Reconstruction in Masked Image Modeling
Yuan Liu
Songyang Zhang
Jiacheng Chen
Kai-xiang Chen
Dahua Lin
67
27
0
04 Mar 2023
MixPHM: Redundancy-Aware Parameter-Efficient Tuning for Low-Resource Visual Question Answering
Jingjing Jiang
Nanning Zheng
MoE
32
6
0
02 Mar 2023
Image Labels Are All You Need for Coarse Seagrass Segmentation
Scarlett Raine
Ross Marchant
Branislav Kusy
Frederic Maire
Tobias Fischer
22
4
0
02 Mar 2023
RAMM: Retrieval-augmented Biomedical Visual Question Answering with Multi-modal Pre-training
Zheng Yuan
Qiao Jin
Chuanqi Tan
Zhengyun Zhao
Hongyi Yuan
Fei Huang
Songfang Huang
44
27
0
01 Mar 2023
Towards Generalisable Video Moment Retrieval: Visual-Dynamic Injection to Image-Text Pre-Training
Dezhao Luo
Jiabo Huang
S. Gong
Hailin Jin
Yang Liu
VGen
21
28
0
28 Feb 2023
Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning
Antoine Yang
Arsha Nagrani
Paul Hongsuck Seo
Antoine Miech
Jordi Pont-Tuset
Ivan Laptev
Josef Sivic
Cordelia Schmid
AI4TS
VLM
23
220
0
27 Feb 2023
Knowledge-enhanced Visual-Language Pre-training on Chest Radiology Images
Xiaoman Zhang
Chaoyi Wu
Ya-Qin Zhang
Yanfeng Wang
Weidi Xie
MedIm
24
119
0
27 Feb 2023
Aligning Bag of Regions for Open-Vocabulary Object Detection
Size Wu
Wenwei Zhang
Sheng Jin
Wentao Liu
Chen Change Loy
VLM
ObjD
42
108
0
27 Feb 2023
Contrastive Video Question Answering via Video Graph Transformer
Junbin Xiao
Pan Zhou
Angela Yao
Yicong Li
Richang Hong
Shuicheng Yan
Tat-Seng Chua
ViT
19
35
0
27 Feb 2023
Improving Medical Speech-to-Text Accuracy with Vision-Language Pre-training Model
Jaeyoung Huh
Sangjoon Park
Jeonghyeon Lee
Jong Chul Ye
LM&MA
9
9
0
27 Feb 2023
Cross-modal Contrastive Learning for Multimodal Fake News Detection
Longzheng Wang
Chuang Zhang
Hongbo Xu
Yongxiu Xu
Xiaohan Xu
Siqi Wang
12
41
0
25 Feb 2023
Deep Learning for Video-Text Retrieval: a Review
Cunjuan Zhu
Qi Jia
Wei-Neng Chen
Yanming Guo
Yu Liu
24
14
0
24 Feb 2023
Modular Deep Learning
Jonas Pfeiffer
Sebastian Ruder
Ivan Vulić
E. Ponti
MoMe
OOD
19
73
0
22 Feb 2023
Focusing On Targets For Improving Weakly Supervised Visual Grounding
V. Pham
Nao Mishima
ObjD
19
1
0
22 Feb 2023
Test-Time Distribution Normalization for Contrastively Learned Vision-language Models
Yi Zhou
Juntao Ren
Fengyu Li
Ramin Zabih
Ser-Nam Lim
VLM
26
13
0
22 Feb 2023
Large-scale Multi-Modal Pre-trained Models: A Comprehensive Survey
Xiao Wang
Guangyao Chen
Guangwu Qian
Pengcheng Gao
Xiaoyong Wei
Yaowei Wang
Yonghong Tian
Wen Gao
AI4CE
VLM
24
199
0
20 Feb 2023
STOA-VLP: Spatial-Temporal Modeling of Object and Action for Video-Language Pre-training
Weihong Zhong
Mao Zheng
Duyu Tang
Xuan Luo
Heng Gong
Xiaocheng Feng
Bing Qin
25
8
0
20 Feb 2023
Towards Unifying Medical Vision-and-Language Pre-training via Soft Prompts
Zhihong Chen
Shizhe Diao
Benyou Wang
Guanbin Li
Xiang Wan
MedIm
17
29
0
17 Feb 2023
À-la-carte Prompt Tuning (APT): Combining Distinct Data Via Composable Prompting
Benjamin Bowman
Alessandro Achille
L. Zancato
Matthew Trager
Pramuditha Perera
Giovanni Paolini
Stefano Soatto
VPVLM
13
19
0
15 Feb 2023
PolyFormer: Referring Image Segmentation as Sequential Polygon Generation
Jiang Liu
Hui Ding
Zhaowei Cai
Yuting Zhang
R. Satzoda
Vijay Mahadevan
R. Manmatha
ObjD
15
120
0
14 Feb 2023
UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling
Haoyu Lu
Yuqi Huo
Guoxing Yang
Zhiwu Lu
Wei Zhan
M. Tomizuka
Mingyu Ding
25
31
0
13 Feb 2023
Paparazzi: A Deep Dive into the Capabilities of Language and Vision Models for Grounding Viewpoint Descriptions
Henrik Voigt
J. Hombeck
M. Meuschke
K. Lawonn
Sina Zarrieß
VLM
20
1
0
13 Feb 2023
VITR: Augmenting Vision Transformers with Relation-Focused Learning for Cross-Modal Information Retrieval
Yansong Gong
Georgina Cosma
Axel Finke
ViT
23
2
0
13 Feb 2023