Multimodal Reranking for Knowledge-Intensive Visual Question Answering
arXiv: 2407.12277 · 17 July 2024
Haoyang Wen, Honglei Zhuang, Hamed Zamani, Alexander Hauptmann, Michael Bendersky
Papers citing "Multimodal Reranking for Knowledge-Intensive Visual Question Answering" (5 / 5 papers shown)
1. An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA
   Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, Lijuan Wang
   169 · 402 · 0 · 10 Sep 2021

2. WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
   Krishna Srinivasan, K. Raman, Jiecao Chen, Michael Bendersky, Marc Najork
   VLM · 197 · 310 · 0 · 02 Mar 2021

3. Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
   Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
   VLM · CLIP · 293 · 3,689 · 0 · 11 Feb 2021

4. Rider: Reader-Guided Passage Reranking for Open-Domain Question Answering
   Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, Weizhu Chen
   OOD · LRM · 134 · 37 · 0 · 01 Jan 2021

5. Distilling Knowledge from Reader to Retriever for Question Answering
   Gautier Izacard, Edouard Grave
   RALM · 180 · 251 · 0 · 08 Dec 2020