arXiv: 2110.03313
Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees
7 October 2021
Aleksandr Beznosikov, Peter Richtárik, Michael Diskin, Max Ryabinin, Alexander Gasnikov
Papers citing "Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees" (13 papers)
Layer-wise Quantization for Quantized Optimistic Dual Averaging
Anh Duc Nguyen, Ilia Markov, Frank Zhengqing Wu, Ali Ramezani-Kebrya, Kimon Antonakopoulos, Dan Alistarh, Volkan Cevher
20 May 2025

Accelerated Methods with Compressed Communications for Distributed Optimization Problems under Data Similarity
AAAI Conference on Artificial Intelligence (AAAI), 2024
Dmitry Bylinkin, Aleksandr Beznosikov
21 Dec 2024

Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity
Qihao Zhou, Haishan Ye, Luo Luo
25 May 2024

Stochastic Extragradient with Random Reshuffling: Improved Convergence for Variational Inequalities
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Konstantinos Emmanouilidis, René Vidal, Nicolas Loizou
11 Mar 2024

Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates
Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik
15 Oct 2023

Distributed Extra-gradient with Optimal Complexity and Communication Guarantees
International Conference on Learning Representations (ICLR), 2023
Ali Ramezani-Kebrya, Kimon Antonakopoulos, Igor Krawczuk, Justin Deschenaux, Volkan Cevher
17 Aug 2023

Towards a Better Theoretical Understanding of Independent Subnetwork Training
International Conference on Machine Learning (ICML), 2023
Egor Shulgin, Peter Richtárik
28 Jun 2023

Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities
Neural Information Processing Systems (NeurIPS), 2023
Aleksandr Beznosikov, Martin Takáč, Alexander Gasnikov
15 Feb 2023

Federated Minimax Optimization with Client Heterogeneity
Pranay Sharma, Rohan Panda, Gauri Joshi
08 Feb 2023

Compression and Data Similarity: Combination of Two Techniques for Communication-Efficient Solving of Distributed Variational Inequalities
Aleksandr Beznosikov, Alexander Gasnikov
19 Jun 2022

Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top
Eduard A. Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
01 Jun 2022

Federated Minimax Optimization: Improved Convergence Analyses and Algorithms
International Conference on Machine Learning (ICML), 2022
Pranay Sharma, Rohan Panda, Gauri Joshi, P. Varshney
09 Mar 2022

Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou
15 Feb 2022