Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning
Kai Yi, Laurent Condat, Peter Richtárik
22 May 2023 · arXiv:2305.13170 · FedML

Papers citing "Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning" (4 of 4 papers shown):

1. GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity
   A. Maranjyan, M. Safaryan, Peter Richtárik (28 Oct 2022)

2. A Field Guide to Federated Optimization
   Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu (14 Jul 2021) · FedML

3. FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
   Samuel Horváth, Stefanos Laskaridis, Mario Almeida, Ilias Leondiadis, Stylianos I. Venieris, Nicholas D. Lane (26 Feb 2021)

4. Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
   A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani (14 Feb 2021) · FedML