CELLM: An Efficient Communication in Large Language Models Training for Federated Learning
Raja Vavekanand, Kira Sam
arXiv:2407.20557 · 30 July 2024

Papers citing "CELLM: An Efficient Communication in Large Language Models Training for Federated Learning" (9 papers)

Multilingual Large Language Model: A Survey of Resources, Taxonomy and Frontiers
Libo Qin, Qiguang Chen, Yuhang Zhou, Zhi Chen, Yinghui Li, Lizi Liao, Min Li, Wanxiang Che, Philip S. Yu
LRM · 36 citations · 07 Apr 2024

Adaptive Federated Pruning in Hierarchical Wireless Networks
Xiaonan Liu, Shiqiang Wang, Yansha Deng, A. Nallanathan
11 citations · 15 May 2023

How to DP-fy ML: A Practical Guide to Machine Learning with Differential Privacy
Natalia Ponomareva, Hussein Hazimeh, Alexey Kurakin, Zheng Xu, Carson E. Denison, H. B. McMahan, Sergei Vassilvitskii, Steve Chien, Abhradeep Thakurta
167 citations · 01 Mar 2023

Conquering the Communication Constraints to Enable Large Pre-Trained Models in Federated Learning
Guangyu Sun, Umar Khalid, Matías Mendieta, Taojiannan Yang, C. L. P. Chen
FedML · 13 citations · 04 Oct 2022

Papaya: Practical, Private, and Scalable Federated Learning
Dzmitry Huba, John Nguyen, Kshitiz Malik, Ruiyu Zhu, Michael G. Rabbat, ..., H. Srinivas, Kaikai Wang, Anthony Shoumikhin, Jesik Min, Mani Malek
FedML · 135 citations · 08 Nov 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 3,835 citations · 18 Apr 2021

Towards Personalized Federated Learning
A. Tan, Han Yu, Li-zhen Cui, Qiang Yang
FedML, AI4CE · 840 citations · 01 Mar 2021

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
4,424 citations · 23 Jan 2020

FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization
Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani
FedML · 758 citations · 28 Sep 2019