Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees
7 October 2021
Aleksandr Beznosikov, Peter Richtárik, Michael Diskin, Max Ryabinin, Alexander Gasnikov

Papers citing "Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees"

13 papers shown
Layer-wise Quantization for Quantized Optimistic Dual Averaging
Anh Duc Nguyen, Ilia Markov, Frank Zhengqing Wu, Ali Ramezani-Kebrya, Kimon Antonakopoulos, Dan Alistarh, Volkan Cevher
20 May 2025

Accelerated Methods with Compressed Communications for Distributed Optimization Problems under Data Similarity
AAAI Conference on Artificial Intelligence (AAAI), 2024
Dmitry Bylinkin, Aleksandr Beznosikov
21 Dec 2024

Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity
Qihao Zhou, Haishan Ye, Luo Luo
25 May 2024

Stochastic Extragradient with Random Reshuffling: Improved Convergence for Variational Inequalities
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Konstantinos Emmanouilidis, René Vidal, Nicolas Loizou
11 Mar 2024

Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates
Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik
15 Oct 2023

Distributed Extra-gradient with Optimal Complexity and Communication Guarantees
International Conference on Learning Representations (ICLR), 2023
Ali Ramezani-Kebrya, Kimon Antonakopoulos, Igor Krawczuk, Justin Deschenaux, Volkan Cevher
17 Aug 2023

Towards a Better Theoretical Understanding of Independent Subnetwork Training
International Conference on Machine Learning (ICML), 2023
Egor Shulgin, Peter Richtárik
28 Jun 2023

Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities
Neural Information Processing Systems (NeurIPS), 2023
Aleksandr Beznosikov, Martin Takáč, Alexander Gasnikov
15 Feb 2023

Federated Minimax Optimization with Client Heterogeneity
Pranay Sharma, Rohan Panda, Gauri Joshi
08 Feb 2023

Compression and Data Similarity: Combination of Two Techniques for Communication-Efficient Solving of Distributed Variational Inequalities
Aleksandr Beznosikov, Alexander Gasnikov
19 Jun 2022

Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top
Eduard A. Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
01 Jun 2022

Federated Minimax Optimization: Improved Convergence Analyses and Algorithms
International Conference on Machine Learning (ICML), 2022
Pranay Sharma, Rohan Panda, Gauri Joshi, P. Varshney
09 Mar 2022

Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou
15 Feb 2022