PEFTT: Parameter-Efficient Fine-Tuning for low-resource Tibetan pre-trained language models
Mingjun Zhou, Daiqing Zhuoma, Qun Nuo, T. Nyima
21 September 2023 · arXiv:2309.12109
Papers citing "PEFTT: Parameter-Efficient Fine-Tuning for low-resource Tibetan pre-trained language models" (6 of 6 shown):

TiBERT: Tibetan Pre-trained Language Model
Yuan Sun, Sisi Liu, Junjie Deng, Xiaobing Zhao
15 May 2022 · 9 citations

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Matthew Cer
15 Oct 2021 · 277 citations · tags: VLM, LRM

RAFT: A Real-World Few-Shot Text Classification Benchmark
Neel Alex, Eli Lifland, Lewis Tunstall, A. Thakur, Pegah Maham, ..., Carolyn Ashurst, Paul Sedille, A. Carlier, M. Noetel, Andreas Stuhlmüller
28 Sep 2021 · 56 citations · tags: RALM

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
18 Apr 2021 · 3,835 citations · tags: VPVLM

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020 · 1,913 citations

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
21 Jan 2020 · 1,584 citations