ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

FDLoRA: Personalized Federated Learning of Large Language Model via Dual LoRA Tuning

12 June 2024
Authors: Jiaxing Qi, Zhongzhi Luan, Shaohan Huang, Carol J. Fung, Hailong Yang, Depei Qian

Papers citing "FDLoRA: Personalized Federated Learning of Large Language Model via Dual LoRA Tuning"

14 / 14 papers shown
• FHBench: Towards Efficient and Personalized Federated Learning for Multimodal Healthcare
  Penghao Wang, Qian Chen, Teng Zhang, Y. Zhang, Wang Lu, Yiqiang Chen
  15 Apr 2025
• Communication-Efficient and Personalized Federated Foundation Model Fine-Tuning via Tri-Matrix Adaptation
  Y. Li, Bo Liu, Sheng Huang, Z. Zhang, Xiaotong Yuan, Richang Hong
  31 Mar 2025
• A Survey on Personalized Alignment -- The Missing Piece for Large Language Models in Real-World Applications [LM&MA]
  Jian-Yu Guan, J. Wu, J. Li, Chuanqi Cheng, Wei Yu Wu
  21 Mar 2025
• A Survey on Federated Fine-tuning of Large Language Models [FedML]
  Yebo Wu, Chunlin Tian, Jingguang Li, He Sun, Kahou Tam, Li Li, Chengzhong Xu
  15 Mar 2025
• FedALT: Federated Fine-Tuning through Adaptive Local Training with Rest-of-the-World LoRA
  Jieming Bian, Lei Wang, Letian Zhang, Jie Xu
  14 Mar 2025
• A Survey of Personalized Large Language Models: Progress and Future Directions [LM&MA]
  Jiahong Liu, Zexuan Qiu, Zhongyang Li, Quanyu Dai, Jieming Zhu, Minda Hu, Menglin Yang, Irwin King
  17 Feb 2025
• Personalized Federated Fine-Tuning for LLMs via Data-Driven Heterogeneous Model Architectures
  Yicheng Zhang, Zhen Qin, Zhaomin Wu, Shuiguang Deng
  28 Nov 2024
• Selective Aggregation for Low-Rank Adaptation in Federated Learning [FedML]
  Pengxin Guo, Shuang Zeng, Y. Wang, Huijie Fan, Feifei Wang, Liangqiong Qu
  02 Oct 2024
• RBLA: Rank-Based-LoRA-Aggregation for Fine-tuning Heterogeneous Models in FLaaS
  Shuaijun Chen, Omid Tavallaie, Niousha Nazemi, Albert Y. Zomaya
  16 Aug 2024
• Pre-Training and Personalized Fine-Tuning via Over-the-Air Federated Meta-Learning: Convergence-Generalization Trade-Offs [AI4CE]
  Haifeng Wen, Hong Xing, Osvaldo Simeone
  17 Jun 2024
• From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers [MoE]
  Bharat Runwal, Tejaswini Pedapati, Pin-Yu Chen
  02 Feb 2024
• GLM-130B: An Open Bilingual Pre-trained Model [BDL, LRM]
  Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng-Zhen Zhang, Yuxiao Dong, Jie Tang
  05 Oct 2022
• ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
  Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi
  24 May 2022
• P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks [VLM]
  Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
  14 Oct 2021