Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs
arXiv: 2304.14999 · 28 April 2023
George Pu, Anirudh Jain, Jihan Yin, Russell Kaplan
Links: ArXiv · PDF · HTML
Papers citing "Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs" (5 / 5 papers shown)
- ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts
  Samar Khanna, Medhanie Irgau, David B. Lobell, Stefano Ermon · VLM · 30 · 4 · 0 · 16 Jun 2024

- Mixed Text Recognition with Efficient Parameter Fine-Tuning and Transformer
  Da Chang, Yu Li · 64 · 2 · 0 · 19 Apr 2024

- Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes
  Zhen Qin, Daoyuan Chen, Bingchen Qian, Bolin Ding, Yaliang Li, Shuiguang Deng · FedML · 32 · 32 · 0 · 11 Dec 2023

- Multitask Prompted Training Enables Zero-Shot Task Generalization
  Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush · LRM · 211 · 1,656 · 0 · 15 Oct 2021

- The Power of Scale for Parameter-Efficient Prompt Tuning
  Brian Lester, Rami Al-Rfou, Noah Constant · VPVLM · 280 · 3,843 · 0 · 18 Apr 2021