ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
6 August 2019
Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee
Topics: SSL, VLM

Papers citing "ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks"

Showing 50 of 2,088 citing papers.

  • Bridging Vision and Language Spaces with Assignment Prediction (15 Apr 2024)
    Jungin Park, Jiyoung Lee, Kwanghoon Sohn
    Topics: VLM
  • Multimodal Cross-Document Event Coreference Resolution Using Linear Semantic Transfer and Mixed-Modality Ensembles (13 Apr 2024)
    Abhijnan Nath, Huma Jamil, Shafiuddin Rehan Ahmed, George Baker, Rahul Ghosh, James H. Martin, Nathaniel Blanchard, Nikhil Krishnaswamy
  • Calibration & Reconstruction: Deep Integrated Language for Referring Image Segmentation (12 Apr 2024)
    Yichen Yan, Xingjian He, Sihan Chen, Jing Liu
    Topics: ObjD
  • FLoRA: Enhancing Vision-Language Models with Parameter-Efficient Federated Learning (12 Apr 2024)
    Duy Phuong Nguyen, J. P. Muñoz, Ali Jannesari
    Topics: VLM
  • Connecting NeRFs, Images, and Text (11 Apr 2024)
    Francesco Ballerini, Pierluigi Zama Ramirez, Roberto Mirabella, Samuele Salti, Luigi Di Stefano
  • MedRG: Medical Report Grounding with Multi-modal Large Language Model (10 Apr 2024)
    K. Zou, Yang Bai, Zhihao Chen, Yang Zhou, Yidi Chen, Kai Ren, Meng Wang, Xuedong Yuan, Xiaojing Shen, Huazhu Fu
    Topics: MedIm
  • Unified Multi-modal Diagnostic Framework with Reconstruction Pre-training and Heterogeneity-combat Tuning (09 Apr 2024)
    Yupei Zhang, Li Pan, Qiushi Yang, Tan Li, Zhen Chen
  • Shortcut-connected Expert Parallelism for Accelerating Mixture-of-Experts (07 Apr 2024)
    Weilin Cai, Juyong Jiang, Le Qin, Junwei Cui, Sunghun Kim, Jiayi Huang
  • Contextual Chart Generation for Cyber Deception (07 Apr 2024)
    David D. Nguyen, David Liebowitz, Surya Nepal, S. Kanhere, Sharif Abuadbba
  • Vision Transformers in Domain Adaptation and Generalization: A Study of Robustness (05 Apr 2024)
    Shadi Alijani, Jamil Fayyad, H. Najjaran
    Topics: OOD
  • CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching (04 Apr 2024)
    Dongzhi Jiang, Guanglu Song, Xiaoshi Wu, Renrui Zhang, Dazhong Shen, Zhuofan Zong, Yu Liu, Hongsheng Li
    Topics: VLM
  • DeViDe: Faceted medical knowledge for improved medical vision-language pre-training (04 Apr 2024)
    Haozhe Luo, Ziyu Zhou, Corentin Royer, Anjany Sekuboyina, Bjoern H. Menze
    Topics: VLM, ViT, MedIm
  • Is CLIP the main roadblock for fine-grained open-world perception? (04 Apr 2024)
    Lorenzo Bianchi, F. Carrara, Nicola Messina, Fabrizio Falchi
    Topics: VLM
  • Cross-Modality Gait Recognition: Bridging LiDAR and Camera Modalities for Human Identification (04 Apr 2024)
    Rui Wang, Chuanfu Shen, M. Marín-Jiménez, George Q. Huang, Shiqi Yu
    Topics: CVBM
  • Diverse and Tailored Image Generation for Zero-shot Multi-label Classification (04 Apr 2024)
    Kai Zhang, Zhixiang Yuan, Tao Huang
    Topics: VLM
  • 3DStyleGLIP: Part-Tailored Text-Guided 3D Neural Stylization (03 Apr 2024)
    Seung-bum Chung, Joohyun Park, Hyewon Kan, Hyeongyeop Kang
    Topics: CLIP
  • DELAN: Dual-Level Alignment for Vision-and-Language Navigation by Cross-Modal Contrastive Learning (02 Apr 2024)
    Mengfei Du, Binhao Wu, Jiwen Zhang, Zhihao Fan, Zejun Li, Ruipu Luo, Xuanjing Huang, Zhongyu Wei
  • Dialogue with Robots: Proposals for Broadening Participation and Research in the SLIVAR Community (01 Apr 2024)
    Casey Kennington, Malihe Alikhani, Heather Pon-Barry, Katherine Atwell, Yonatan Bisk, ..., Jivko Sinapov, Angela Stewart, Matthew Stone, Stefanie Tellex, Tom Williams
  • SyncMask: Synchronized Attentional Masking for Fashion-centric Vision-Language Pretraining (01 Apr 2024)
    Chull Hwan Song, Taebaek Hwang, Jooyoung Yoon, Shunghyun Choi, Yeong Hyeon Gu
Learn "No" to Say "Yes" Better: Improving Vision-Language Models via
  Negations
Learn "No" to Say "Yes" Better: Improving Vision-Language Models via Negations
Jaisidh Singh
Ishaan Shrivastava
Mayank Vatsa
Richa Singh
Aparna Bharati
VLM
CoGe
24
14
0
29 Mar 2024
  • FSMR: A Feature Swapping Multi-modal Reasoning Approach with Joint Textual and Visual Clues (29 Mar 2024)
    Shuang Li, Jiahua Wang, Lijie Wen
    Topics: LRM
  • Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want (29 Mar 2024)
    Weifeng Lin, Xinyu Wei, Ruichuan An, Peng Gao, Bocheng Zou, Yulin Luo, Siyuan Huang, Shanghang Zhang, Hongsheng Li
    Topics: VLM
  • Text Data-Centric Image Captioning with Interactive Prompts (28 Mar 2024)
    Yiyu Wang, Hao Luo, Jungang Xu, Yingfei Sun, Fan Wang
    Topics: VLM
  • Generative Multi-modal Models are Good Class-Incremental Learners (27 Mar 2024)
    Xusheng Cao, Haori Lu, Linlan Huang, Xialei Liu, Ming-Ming Cheng
    Topics: CLL
  • m3P: Towards Multimodal Multilingual Translation with Multimodal Prompt (26 Mar 2024)
    Jian Yang, Hongcheng Guo, Yuwei Yin, Jiaqi Bai, Bing Wang, Jiaheng Liu, Xinnian Liang, Linzheng Chai, Liqun Yang, Zhoujun Li
  • UrbanVLP: Multi-Granularity Vision-Language Pretraining for Urban Socioeconomic Indicator Prediction (25 Mar 2024)
    Xixuan Hao, Wei Chen, Yibo Yan, Siru Zhong, Kun Wang, Qingsong Wen, Yuxuan Liang
    Topics: VLM
  • Temporal-Spatial Object Relations Modeling for Vision-and-Language Navigation (23 Mar 2024)
    Bowen Huang, Yanwei Zheng, Chuanlin Lan, Xinpeng Zhao, Yifei Zou, Dongxiao Yu
  • Not All Attention is Needed: Parameter and Computation Efficient Transfer Learning for Multi-modal Large Language Models (22 Mar 2024)
    Qiong Wu, Weihao Ye, Yiyi Zhou, Xiaoshuai Sun, Rongrong Ji
    Topics: MoE
  • As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks? (19 Mar 2024)
    Anjun Hu, Jindong Gu, Francesco Pinto, Konstantinos Kamnitsas, Philip H. S. Torr
    Topics: AAML, SILM
  • CLIP-VIS: Adapting CLIP for Open-Vocabulary Video Instance Segmentation (19 Mar 2024)
    Wenqi Zhu, Jiale Cao, Jin Xie, Shuangming Yang, Yanwei Pang
    Topics: VLM, CLIP
  • Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning (19 Mar 2024)
    Chong Ma, Hanqi Jiang, Wenting Chen, Yiwei Li, Zihao Wu, ..., Dajiang Zhu, Tuo Zhang, Dinggang Shen, Tianming Liu, Xiang Li
  • Hierarchical Spatial Proximity Reasoning for Vision-and-Language Navigation (18 Mar 2024)
    Ming Xu, Zilong Xie
  • Semantic-Enhanced Representation Learning for Road Networks with Temporal Dynamics (18 Mar 2024)
    Yile Chen, Xiucheng Li, Gao Cong, Zhifeng Bao, Cheng Long
  • Can LLMs Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis (18 Mar 2024)
    Vishnu Sashank Dorbala, Sanjoy Chowdhury, Dinesh Manocha
    Topics: LM&Ro
  • Mixture-of-Prompt-Experts for Multi-modal Semantic Understanding (17 Mar 2024)
    Zichen Wu, Hsiu-Yuan Huang, Fanyi Qu, Yunfang Wu
    Topics: VLM, MoE
  • Deciphering Hate: Identifying Hateful Memes and Their Targets (16 Mar 2024)
    E. Hossain, Omar Sharif, M. M. Hoque, S. Preum
  • Joint Multimodal Transformer for Emotion Recognition in the Wild (15 Mar 2024)
    Paul Waligora, Haseeb Aslam, Osama Zeeshan, Soufiane Belharbi, A. L. Koerich, M. Pedersoli, Simon L. Bacon, Eric Granger
  • Knowledge Condensation and Reasoning for Knowledge-based VQA (15 Mar 2024)
    Dongze Hao, Jian Jia, Longteng Guo, Qunbo Wang, Te Yang, ..., Yanhua Cheng, Bo Wang, Quan Chen, Han Li, Jing Liu
  • GET: Unlocking the Multi-modal Potential of CLIP for Generalized Category Discovery (15 Mar 2024)
    Enguang Wang, Zhimao Peng, Zhengyuan Xie, Fei Yang, Xialei Liu, Ming-Ming Cheng
  • PosSAM: Panoptic Open-vocabulary Segment Anything (14 Mar 2024)
    Vibashan VS, Shubhankar Borse, Hyojin Park, Debasmit Das, Vishal M. Patel, Munawar Hayat, Fatih Porikli
    Topics: VLM, MLLM
  • MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training (14 Mar 2024)
    Brandon McKinzie, Zhe Gan, J. Fauconnier, Sam Dodge, Bowen Zhang, ..., Zirui Wang, Ruoming Pang, Peter Grasch, Alexander Toshev, Yinfei Yang
    Topics: MLLM
  • Efficient Prompt Tuning of Large Vision-Language Model for Fine-Grained Ship Classification (13 Mar 2024)
    Long Lan, Fengxiang Wang, Shuyan Li, Xiangtao Zheng, Zengmao Wang, Xinwang Liu
    Topics: VLM
  • VideoMamba: State Space Model for Efficient Video Understanding (11 Mar 2024)
    Kunchang Li, Xinhao Li, Yi Wang, Yinan He, Yali Wang, Limin Wang, Yu Qiao
    Topics: Mamba
  • Multi-modal Semantic Understanding with Contrastive Cross-modal Feature Alignment (11 Mar 2024)
    Minghua Zhang, Ke Chang, Yunfang Wu
  • Towards Deviation-Robust Agent Navigation via Perturbation-Aware Contrastive Learning (09 Mar 2024)
    Bingqian Lin, Yanxin Long, Yi Zhu, Fengda Zhu, Xiaodan Liang, Qixiang Ye, Liang Lin
  • Effectiveness Assessment of Recent Large Vision-Language Models (07 Mar 2024)
    Yao Jiang, Xinyu Yan, Ge-Peng Ji, Keren Fu, Meijun Sun, Huan Xiong, Deng-Ping Fan, Fahad Shahbaz Khan
  • Transformers and Language Models in Form Understanding: A Comprehensive Review of Scanned Document Analysis (06 Mar 2024)
    Abdelrahman Abdallah, Daniel Eberharter, Zoe Pfister, Adam Jatowt
  • Temporal Cross-Attention for Dynamic Embedding and Tokenization of Multimodal Electronic Health Records (06 Mar 2024)
    Yingbo Ma, Suraj Kolla, Dhruv Kaliraman, Victoria Nolan, Zhenhong Hu, ..., T. Ozrazgat-Baslanti, Tyler J. Loftus, Parisa Rashidi, A. Bihorac, B. Shickel
    Topics: AI4TS
  • FAR: Flexible, Accurate and Robust 6DoF Relative Camera Pose Estimation (05 Mar 2024)
    C. Rockwell, Nilesh Kulkarni, Linyi Jin, Jeong Joon Park, Justin Johnson, David Fouhey
  • The Case for Evaluating Multimodal Translation Models on Text Datasets (05 Mar 2024)
    Vipin Vijayan, Braeden Bowen, Scott Grigsby, Timothy Anderson, Jeremy Gwinnup