arXiv:2110.07867
Exploring Universal Intrinsic Task Subspace via Prompt Tuning
15 October 2021
Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, Jing Yi, Weize Chen, Zhiyuan Liu, Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie Zhou
Tags: VLM, VPVLM
Papers citing "Exploring Universal Intrinsic Task Subspace via Prompt Tuning" (8 papers)
Paradigm Shift in Natural Language Processing
Tianxiang Sun, Xiangyang Liu, Xipeng Qiu, Xuanjing Huang
26 Sep 2021
CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP
Qinyuan Ye, Bill Yuchen Lin, Xiang Ren
18 Apr 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM
18 Apr 2021
WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
Tags: AAML
01 Jan 2021
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
21 Jan 2020
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
Tags: KELM
03 Sep 2019