ResearchTrend.AI

arXiv:2302.09832

TAMUNA: Doubly Accelerated Distributed Optimization with Local Training, Compression, and Partial Participation

20 February 2023
Laurent Condat
Ivan Agarský
Grigory Malinovsky
Peter Richtárik
    FedML

Papers citing "TAMUNA: Doubly Accelerated Distributed Optimization with Local Training, Compression, and Partial Participation"

9 / 9 papers shown
Enhancing Privacy in Federated Learning through Local Training
Nicola Bastianello
Changxin Liu
Karl H. Johansson
38
2
0
26 Mar 2024
FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models
Kai Yi
Georg Meinhardt
Laurent Condat
Peter Richtárik
FedML
32
6
0
14 Mar 2024
Correlated Quantization for Faster Nonconvex Distributed Optimization
Andrei Panferov
Yury Demidovich
Ahmad Rammal
Peter Richtárik
MQ
23
4
0
10 Jan 2024
Revisiting Decentralized ProxSkip: Achieving Linear Speedup
Luyao Guo
Sulaiman A. Alghunaim
Kun Yuan
Laurent Condat
Jinde Cao
FedML
24
1
0
12 Oct 2023
EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin
Igor Sokolov
Eduard A. Gorbunov
Zhize Li
Peter Richtárik
42
44
0
07 Oct 2021
A Field Guide to Federated Optimization
Jianyu Wang
Zachary B. Charles
Zheng Xu
Gauri Joshi
H. B. McMahan
...
Mi Zhang
Tong Zhang
Chunxiang Zheng
Chen Zhu
Wennan Zhu
FedML
173
410
0
14 Jul 2021
Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
A. Mitra
Rayana H. Jaafar
George J. Pappas
Hamed Hassani
FedML
55
157
0
14 Feb 2021
Linearly Converging Error Compensated SGD
Eduard A. Gorbunov
D. Kovalev
Dmitry Makarenko
Peter Richtárik
163
77
0
23 Oct 2020
FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization
Amirhossein Reisizadeh
Aryan Mokhtari
Hamed Hassani
Ali Jadbabaie
Ramtin Pedarsani
FedML
157
758
0
28 Sep 2019