ViPE: Visualise Pretty-much Everything
arXiv: 2310.10543 · 16 October 2023
Hassan Shahmohammadi, Adhiraj Ghosh, Hendrik P. A. Lensch
Tags: DiffM

Papers citing "ViPE: Visualise Pretty-much Everything" (5 of 5 papers shown)
No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance
Vishaal Udandarao
Ameya Prabhu
Adhiraj Ghosh
Yash Sharma
Philip H. S. Torr
Adel Bibi
Samuel Albanie
Matthias Bethge
VLM
04 Apr 2024

Text-Only Training for Image Captioning using Noise-Injected CLIP
David Nukrai, Ron Mokady, Amir Globerson
Tags: VLM, CLIP
01 Nov 2022

FLUTE: Figurative Language Understanding through Textual Explanations
Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, Smaranda Muresan
24 May 2022

BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi
Tags: MLLM, BDL, VLM, CLIP
28 Jan 2022

Towards Zero-Label Language Learning
Zirui Wang, Adams Wei Yu, Orhan Firat, Yuan Cao
Tags: SyDa
19 Sep 2021