arXiv: 2204.00166
Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning
1 April 2022
Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, Jun Huang
VLM
Papers citing "Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning" (14 papers shown)
Prompt-Driven Continual Graph Learning. Qi Wang, Tianfei Zhou, Ye Yuan, Rui Mao. CLL. 10 Feb 2025.
Concentrate Attention: Towards Domain-Generalizable Prompt Optimization for Language Models. Chengzhengxu Li, Xiaoming Liu, Zhaohan Zhang, Yichen Wang, Chen Liu, Y. Lan, Chao Shen. 15 Jun 2024.
Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous Prompt Learning. Qizhou Chen, Taolin Zhang, Xiaofeng He, Dongyang Li, Chengyu Wang, Longtao Huang, Hui Xue. CLL, KELM. 06 May 2024.
Heterogeneous Contrastive Learning for Foundation Models and Beyond. Lecheng Zheng, Baoyu Jing, Zihao Li, Hanghang Tong, Jingrui He. VLM. 30 Mar 2024.
Knowledgeable In-Context Tuning: Exploring and Exploiting Factual Knowledge for In-Context Learning. J. Wang, Chengyu Wang, Chuanqi Tan, Jun Huang, Ming Gao. KELM. 26 Sep 2023.
Deep Learning for Cross-Domain Few-Shot Visual Recognition: A Survey. Huali Xu, Shuaifeng Zhi, Shuzhou Sun, Vishal M. Patel, Li Liu. 15 Mar 2023.
A Comprehensive Survey of Few-shot Learning: Evolution, Applications, Challenges, and Opportunities. Yisheng Song, Ting-Yuan Wang, S. Mondal, J. P. Sahoo. SLR. 13 May 2022.
EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing. Chengyu Wang, Minghui Qiu, Chen Shi, Taolin Zhang, Tingting Liu, Lei Li, J. Wang, Ming Wang, Jun Huang, W. Lin. 30 Apr 2022.
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks. Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang. VLM. 14 Oct 2021.
WARP: Word-level Adversarial ReProgramming. Karen Hambardzumyan, Hrant Khachatrian, Jonathan May. AAML. 01 Jan 2021.
Making Pre-trained Language Models Better Few-shot Learners. Tianyu Gao, Adam Fisch, Danqi Chen. 31 Dec 2020.
Big Bird: Transformers for Longer Sequences. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. VLM. 28 Jul 2020.
Pre-trained Models for Natural Language Processing: A Survey. Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang. LM&MA, VLM. 18 Mar 2020.
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference. Timo Schick, Hinrich Schütze. 21 Jan 2020.