ResearchTrend.AI
SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models (arXiv:2308.06522)

12 August 2023
Sara Babakniya, A. Elkordy, Yahya H. Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa El-Khamy, Salman Avestimehr

Papers citing "SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models"

16 / 16 papers shown

Communication-Efficient Federated Fine-Tuning of Language Models via Dynamic Update Schedules
Michail Theologitis, V. Samoladas, Antonios Deligiannakis
07 May 2025

Federated Adapter on Foundation Models: An Out-Of-Distribution Approach
Yiyuan Yang, Guodong Long, Tianyi Zhou, Qinghua Lu, Shanshan Ye, Jing Jiang
Communities: OODD
02 May 2025

Communication-Efficient Wireless Federated Fine-Tuning for Large-Scale AI Models
Bumjun Kim, Wan Choi
01 May 2025

Fed-SB: A Silver Bullet for Extreme Communication Efficiency and Performance in (Private) Federated LoRA Fine-Tuning
Raghav Singhal, Kaustubh Ponkshe, Rohit Vartak, Lav R. Varshney, Praneeth Vepakomma
Communities: FedML
24 Feb 2025

Decentralized Low-Rank Fine-Tuning of Large Language Models
Sajjad Ghiasvand, Mahnoosh Alizadeh, Ramtin Pedarsani
Communities: ALM
26 Jan 2025

Aggregating Low Rank Adapters in Federated Fine-tuning
Evelyn Trautmann, Ian Hales, Martin F. Volk
Communities: AI4CE, FedML
10 Jan 2025

Efficient Federated Finetuning of Tiny Transformers with Resource-Constrained Devices
Kilian Pfeiffer, Mohamed Aboelenien Ahmed, R. Khalili, J. Henkel
12 Nov 2024

Mobile Edge Intelligence for Large Language Models: A Contemporary Survey
Guanqiao Qu, Qiyuan Chen, Wei Wei, Zheng Lin, Xianhao Chen, Kaibin Huang
09 Jul 2024

DAGER: Exact Gradient Inversion for Large Language Models
Ivo Petrov, Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Muller, Martin Vechev
Communities: FedML
24 May 2024

FeDeRA: Efficient Fine-tuning of Language Models in Federated Learning Leveraging Weight Decomposition
Yuxuan Yan, Qianqian Yang, Shunpu Tang, Zhiguo Shi
29 Apr 2024

Federated Learning Priorities Under the European Union Artificial Intelligence Act
Herbert Woisetschläger, Alexander Erben, Bill Marino, Shiqiang Wang, Nicholas D. Lane, R. Mayer, Hans-Arno Jacobsen
05 Feb 2024

Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes
Zhen Qin, Daoyuan Chen, Bingchen Qian, Bolin Ding, Yaliang Li, Shuiguang Deng
Communities: FedML
11 Dec 2023

Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation
Chen Dun, Mirian Hipolito Garcia, Guoqing Zheng, Ahmed Hassan Awadallah, Anastasios Kyrillidis, Robert Sim
04 Oct 2023

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Communities: VPVLM
18 Apr 2021

FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
Samuel Horváth, Stefanos Laskaridis, Mario Almeida, Ilias Leondiadis, Stylianos I. Venieris, Nicholas D. Lane
26 Feb 2021

The Lottery Ticket Hypothesis for Pre-trained BERT Networks
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin
23 Jul 2020