UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models
arXiv: 2310.10942
17 October 2023
Yangyang Guo, Fangkai Jiao, Zhiqi Shen, Liqiang Nie, Mohan S. Kankanhalli
Tags: MLLM
Papers citing "UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models" (6 of 6 papers shown):

EgoPlan-Bench2: A Benchmark for Multimodal Large Language Model Planning in Real-World Scenarios
Lu Qiu, Yuying Ge, Yi Chen, Yixiao Ge, Ying Shan, Xihui Liu
Tags: LLMAG, LRM
05 Dec 2024

LMOD: A Large Multimodal Ophthalmology Dataset and Benchmark for Large Vision-Language Models
Zhenyue Qin, Yu Yin, Dylan Campbell, Xuansheng Wu, Ke Zou, Yih-Chung Tham, Ninghao Liu, Xiuzhen Zhang, Qingyu Chen
02 Oct 2024

Task Contamination: Language Models May Not Be Few-Shot Anymore
Changmao Li, Jeffrey Flanigan
26 Dec 2023

Large Language Model Is Not a Good Few-shot Information Extractor, but a Good Reranker for Hard Samples!
Yubo Ma, Yixin Cao, YongChing Hong, Aixin Sun
Tags: RALM
15 Mar 2023

BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
Junnan Li, Dongxu Li, Silvio Savarese, Steven C. H. Hoi
Tags: VLM, MLLM
30 Jan 2023

BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Junnan Li, Dongxu Li, Caiming Xiong, Steven C. H. Hoi
Tags: MLLM, BDL, VLM, CLIP
28 Jan 2022