Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of Rewards
arXiv: 2210.12050 · 21 October 2022
Yekun Chai, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
Papers citing "Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of Rewards" (7 of 7 papers shown):

1. Putting People in LLMs' Shoes: Generating Better Answers via Question Rewriter (20 Aug 2024)
   Junhao Chen, Bowen Wang, Zhouqiang Jiang, Yuta Nakashima — 1 citation

2. Gradient-Free Textual Inversion (12 Apr 2023)
   Zhengcong Fei, Mingyuan Fan, Junshi Huang — 31 citations

3. The Power of Scale for Parameter-Efficient Prompt Tuning (18 Apr 2021)
   Brian Lester, Rami Al-Rfou, Noah Constant — 3,784 citations

4. Making Pre-trained Language Models Better Few-shot Learners (31 Dec 2020)
   Tianyu Gao, Adam Fisch, Danqi Chen — 1,898 citations

5. The Lottery Ticket Hypothesis for Pre-trained BERT Networks (23 Jul 2020)
   Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin — 345 citations

6. Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference (21 Jan 2020)
   Timo Schick, Hinrich Schütze — 1,584 citations

7. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (20 Apr 2018)
   Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman — 6,927 citations