Distributed Second Order Methods with Fast Rates and Compressed Communication
Rustem Islamov, Xun Qian, Peter Richtárik
arXiv:2102.07158 (14 February 2021)

Papers citing "Distributed Second Order Methods with Fast Rates and Compressed Communication" (26 papers shown)

GP-FL: Model-Based Hessian Estimation for Second-Order Over-the-Air Federated Learning
Shayan Mohajer Hamidi, Ali Bereyhi, S. Asaad, H. Vincent Poor (05 Dec 2024)

Natural Policy Gradient and Actor Critic Methods for Constrained Multi-Task Reinforcement Learning
Sihan Zeng, Thinh T. Doan, Justin Romberg (03 May 2024)

Distributed Adaptive Greedy Quasi-Newton Methods with Explicit Non-asymptotic Convergence Bounds
Yubo Du, Keyou You (30 Nov 2023)

FedECA: A Federated External Control Arm Method for Causal Inference with Time-To-Event Data in Distributed Settings
Jean Ogier du Terrail, Quentin Klopfenstein, Honghao Li, Imke Mayer, Nicolas Loiseau, Mohammad Hallal, Félix Balazard, M. Andreux (28 Nov 2023)

Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates
Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik (15 Oct 2023)

Improved Communication Efficiency in Federated Natural Policy Gradient via ADMM-based Gradient Updates
Guangchen Lan, Han Wang, James Anderson, Christopher G. Brinton, Vaneet Aggarwal (09 Oct 2023) [FedML]

Efficient Federated Learning via Local Adaptive Amended Optimizer with Linear Speedup
Yan Sun, Li Shen, Hao Sun, Liang Ding, Dacheng Tao (30 Jul 2023) [FedML]

Towards a Better Theoretical Understanding of Independent Subnetwork Training
Egor Shulgin, Peter Richtárik (28 Jun 2023) [AI4CE]

Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox
Abdurakhmon Sadiev, D. Kovalev, Peter Richtárik (08 Jul 2022)

FedSSO: A Federated Server-Side Second-Order Optimization Algorithm
Xinteng Ma, Renyi Bao, Jinpeng Jiang, Yang Liu, Arthur Jiang, Junhua Yan, Xin Liu, Zhisong Pan (20 Jun 2022) [FedML]

Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
Rustem Islamov, Xun Qian, Slavomír Hanzely, M. Safaryan, Peter Richtárik (07 Jun 2022)

Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top
Eduard A. Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel (01 Jun 2022) [AAML]

Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence
Sen Na, Michal Derezinski, Michael W. Mahoney (20 Apr 2022)

SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing
Nicolò Dal Fabbro, S. Dey, M. Rossi, Luca Schenato (11 Feb 2022) [FedML]

Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning
Xun Qian, Rustem Islamov, M. Safaryan, Peter Richtárik (02 Nov 2021) [FedML]

Distributed Principal Component Analysis with Limited Communication
Foivos Alimisis, Peter Davies, Bart Vandereycken, Dan Alistarh (27 Oct 2021)

What Do We Mean by Generalization in Federated Learning?
Honglin Yuan, Warren Morningstar, Lin Ning, K. Singhal (27 Oct 2021) [OOD, FedML]

A Stochastic Newton Algorithm for Distributed Convex Optimization
Brian Bullins, Kumar Kshitij Patel, Ohad Shamir, Nathan Srebro, Blake E. Woodworth (07 Oct 2021)

EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik (07 Oct 2021)

On Second-order Optimization Methods for Federated Learning
Sebastian Bischoff, Stephan Günnemann, Martin Jaggi, Sebastian U. Stich (06 Sep 2021) [FedML]

EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin (09 Jun 2021)

FedNL: Making Newton-Type Methods Applicable to Federated Learning
M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik (05 Jun 2021) [FedML]

Communication-Efficient Distributed Optimization with Quantized Preconditioners
Foivos Alimisis, Peter Davies, Dan Alistarh (14 Feb 2021)

Towards Tight Communication Lower Bounds for Distributed Optimisation
Dan Alistarh, Janne H. Korhonen (16 Oct 2020) [FedML]

Differentially Quantized Gradient Methods
Chung-Yi Lin, V. Kostina, B. Hassibi (06 Feb 2020) [MQ]

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang (19 Mar 2014) [ODL]