A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models
arXiv:2110.08484, 16 October 2021
Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, Xiang Ren
Tags: VLM, VPVLM, MLLM
Papers citing "A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models" (22 papers):
"Visual Adaptive Prompting for Compositional Zero-Shot Learning". Kyle Stein, A. Mahyari, Guillermo A. Francia, Eman El-Sheikh. 27 Feb 2025. [VLM, CoGe]
"Large Multimodal Models for Low-Resource Languages: A Survey". Marian Lupascu, Ana-Cristina Rogoz, Mihai-Sorin Stupariu, Radu Tudor Ionescu. 08 Feb 2025.
"Exploring the Use of Contrastive Language-Image Pre-Training for Human Posture Classification: Insights from Yoga Pose Analysis". Andrzej D. Dobrzycki, Ana M. Bernardos, Luca Bergesio, Andrzej Pomirski, Daniel Sáez-Trigueros. 13 Jan 2025. [3DH]
"A RAG Approach for Generating Competency Questions in Ontology Engineering". Xueli Pan, Jacco van Ossenbruggen, Victor de Boer, Zhisheng Huang. 13 Sep 2024.
"Paraphrase and Aggregate with Large Language Models for Minimizing Intent Classification Errors". Vikas Yadav, Zheng Tang, Vijay Srinivasan. 24 Jun 2024.
"Enhancing Vision-Language Pre-training with Rich Supervisions". Yuan Gao, Kunyu Shi, Pengkai Zhu, Edouard Belval, Oren Nuriel, Srikar Appalaraju, Shabnam Ghadar, Vijay Mahadevan, Zhuowen Tu, Stefano Soatto. 05 Mar 2024. [VLM, CLIP]
"Improving Zero-shot Visual Question Answering via Large Language Models with Reasoning Question Prompts". Yunshi Lan, Xiang Li, Xin Liu, Yang Li, Wei Qin, Weining Qian. 15 Nov 2023. [LRM, ReLM]
"VLIS: Unimodal Language Models Guide Multimodal Language Generation". Jiwan Chung, Youngjae Yu. 15 Oct 2023. [VLM]
"Tackling VQA with Pretrained Foundation Models without Further Training". Alvin De Jun Tan, Bingquan Shen. 27 Sep 2023. [MLLM]
"Investigating Prompting Techniques for Zero- and Few-Shot Visual Question Answering". Rabiul Awal, Le Zhang, Aishwarya Agrawal. 16 Jun 2023. [LRM]
"Modularized Zero-shot VQA with Pre-trained Models". Rui Cao, Jing Jiang. 27 May 2023. [LRM]
"The Contribution of Knowledge in Visiolinguistic Learning: A Survey on Tasks and Challenges". Maria Lymperaiou, Giorgos Stamou. 04 Mar 2023. [VLM]
"See, Think, Confirm: Interactive Prompting Between Vision and Language Models for Knowledge-based Visual Reasoning". Zhenfang Chen, Qinhong Zhou, Yikang Shen, Yining Hong, Hao Zhang, Chuang Gan. 12 Jan 2023. [LRM, VLM]
"TabLLM: Few-shot Classification of Tabular Data with Large Language Models". S. Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, David Sontag. 19 Oct 2022. [LMTD]
"Prompt-based Learning for Unpaired Image Captioning". Peipei Zhu, Xiao Wang, Lin Zhu, Zhenglong Sun, Weishi Zheng, Yaowei Wang, C. L. P. Chen. 26 May 2022. [VLM]
"CoCa: Contrastive Captioners are Image-Text Foundation Models". Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, Yonghui Wu. 04 May 2022. [VLM, CLIP, OffRL]
"PSP: Pre-trained Soft Prompts for Few-Shot Abstractive Summarization". Xiaochen Liu, Yang Gao, Yu Bai, Jiawei Li, Yinan Hu, Yang Gao, Boxing Chen. 09 Apr 2022.
"An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA". Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, Lijuan Wang. 10 Sep 2021.
"Unifying Vision-and-Language Tasks via Text Generation". Jaemin Cho, Jie Lei, Hao Tan, Mohit Bansal. 04 Feb 2021. [MLLM]
"Making Pre-trained Language Models Better Few-shot Learners". Tianyu Gao, Adam Fisch, Danqi Chen. 31 Dec 2020.
"Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference". Timo Schick, Hinrich Schütze. 21 Jan 2020.
"Unified Vision-Language Pre-Training for Image Captioning and VQA". Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, Jianfeng Gao. 24 Sep 2019. [MLLM, VLM]