From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers

2 February 2024 · arXiv: 2402.01911
Bharat Runwal, Tejaswini Pedapati, Pin-Yu Chen
MoE

Papers citing "From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers" (4 of 4 papers shown)
FDLoRA: Personalized Federated Learning of Large Language Model via Dual LoRA Tuning
Jiaxing Qi, Zhongzhi Luan, Shaohan Huang, Carol J. Fung, Hailong Yang, Depei Qian
12 Jun 2024 · 19 / 12 / 0

Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference
Sneha Kudugunta, Yanping Huang, Ankur Bapna, M. Krikun, Dmitry Lepikhin, Minh-Thang Luong, Orhan Firat
MoE · 24 Sep 2021 · 119 / 87 / 0

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 18 Apr 2021 · 275 / 3,784 / 0

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 20 Apr 2018 · 294 / 6,927 / 0