Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
Vladislav Lialin, Vijeta Deshpande, Anna Rumshisky
arXiv:2303.15647, 28 March 2023

Papers citing "Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning"
(7 of 107 papers shown)

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
15 Oct 2021 · Tags: LRM

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Matthew Cer
15 Oct 2021 · Tags: VLM, LRM

Exploring Universal Intrinsic Task Subspace via Prompt Tuning
Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, ..., Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie Zhou
15 Oct 2021 · Tags: VLM, VPVLM

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
18 Apr 2021 · Tags: VPVLM

Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with 1/n Parameters
Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, A. Luu, S. Hui, Jie Fu
17 Feb 2021 · Tags: MQ

WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
01 Jan 2021 · Tags: AAML

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
17 Sep 2019 · Tags: MoE