arXiv:2402.01911
From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers
Bharat Runwal, Tejaswini Pedapati, Pin-Yu Chen
2 February 2024
Topic: MoE
Papers citing "From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers" (4 / 4 papers shown)
1. FDLoRA: Personalized Federated Learning of Large Language Model via Dual LoRA Tuning
   Jiaxing Qi, Zhongzhi Luan, Shaohan Huang, Carol J. Fung, Hailong Yang, Depei Qian
   12 Jun 2024
2. Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference
   Sneha Kudugunta, Yanping Huang, Ankur Bapna, M. Krikun, Dmitry Lepikhin, Minh-Thang Luong, Orhan Firat
   Topic: MoE
   24 Sep 2021
3. The Power of Scale for Parameter-Efficient Prompt Tuning
   Brian Lester, Rami Al-Rfou, Noah Constant
   Topic: VPVLM
   18 Apr 2021
4. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
   Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
   Topic: ELM
   20 Apr 2018