MARINA: Faster Non-Convex Distributed Learning with Compression

International Conference on Machine Learning (ICML), 2021
15 February 2021
Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
arXiv: 2102.07845

Papers citing "MARINA: Faster Non-Convex Distributed Learning with Compression"

25 of 75 citing papers shown.

FedShuffle: Recipes for Better Use of Local Work in Federated Learning
Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael G. Rabbat
FedML · 27 Apr 2022

Privacy-Aware Compression for Federated Data Analysis
Conference on Uncertainty in Artificial Intelligence (UAI), 2022
Kamalika Chaudhuri, Chuan Guo, Michael G. Rabbat
FedML · 15 Mar 2022

Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou
15 Feb 2022

FL_PyTorch: optimization research simulator for federated learning
Konstantin Burlachenko, Samuel Horváth, Peter Richtárik
FedML · 07 Feb 2022

DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization
Alexander Tyurin, Peter Richtárik
02 Feb 2022

3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation
International Conference on Machine Learning (ICML), 2022
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin, Elnur Gasanov, Zhize Li, Eduard A. Gorbunov
02 Feb 2022

BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression
Neural Information Processing Systems (NeurIPS), 2022
Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi
31 Jan 2022

Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization
Grigory Malinovsky, Konstantin Mishchenko, Peter Richtárik
FedML · 26 Jan 2022

Faster Rates for Compressed Federated Learning with Client-Variance Reduction
SIAM Journal on Mathematics of Data Science (SIMODS), 2021
Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik
FedML · 24 Dec 2021

DSAG: A mixed synchronous-asynchronous iterative method for straggler-resilient learning
IEEE Transactions on Communications (IEEE Trans. Commun.), 2021
A. Severinson, E. Rosnes, S. E. Rouayheb, Alexandre Graell i Amat
27 Nov 2021

Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees
Aleksandr Beznosikov, Peter Richtárik, Michael Diskin, Max Ryabinin, Alexander Gasnikov
FedML · 07 Oct 2021

EF21 with Bells & Whistles: Six Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik
07 Oct 2021

EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning
S. Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Y. Ben-Itzhak, Michael Mitzenmacher
FedML · 19 Aug 2021

FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning
Haoyu Zhao, Zhize Li, Peter Richtárik
FedML · 10 Aug 2021

CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression
Neural Information Processing Systems (NeurIPS), 2021
Zhize Li, Peter Richtárik
20 Jul 2021

Secure Distributed Training at Scale
International Conference on Machine Learning (ICML), 2021
Eduard A. Gorbunov, Alexander Borzunov, Michael Diskin, Max Ryabinin
FedML · 21 Jun 2021

EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
Neural Information Processing Systems (NeurIPS), 2021
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin
09 Jun 2021

Fast Federated Learning in the Presence of Arbitrary Device Unavailability
Neural Information Processing Systems (NeurIPS), 2021
Xinran Gu, Kaixuan Huang, Jingzhao Zhang, Longbo Huang
FedML · 08 Jun 2021

Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques
Neural Information Processing Systems (NeurIPS), 2021
Bokun Wang, M. Safaryan, Peter Richtárik
MQ · 07 Jun 2021

FedNL: Making Newton-Type Methods Applicable to Federated Learning
International Conference on Machine Learning (ICML), 2021
M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik
FedML · 05 Jun 2021

Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Neural Information Processing Systems (NeurIPS), 2021
Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko
04 Mar 2021

Recent Theoretical Advances in Non-Convex Optimization
Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard A. Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev
11 Dec 2020

Faster Non-Convex Federated Learning via Global and Local Momentum
Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu
FedML · 07 Dec 2020

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
International Conference on Machine Learning (ICML), 2020
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik
ODL · 25 Aug 2020

Differentially Quantized Gradient Methods
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2020
Chung-Yi Lin, V. Kostina, B. Hassibi
MQ · 06 Feb 2020