FedNL: Making Newton-Type Methods Applicable to Federated Learning

M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik
5 June 2021. arXiv:2106.02969. FedML.

Papers citing "FedNL: Making Newton-Type Methods Applicable to Federated Learning"

14 papers:

  1. Accelerated Distributed Optimization with Compression and Error Feedback
     Yuan Gao, Anton Rodomanov, Jeremy Rack, Sebastian U. Stich. 11 Mar 2025.
  2. Sketched Adaptive Federated Deep Learning: A Sharp Convergence Analysis
     Zhijie Chen, Qiaobo Li, A. Banerjee. 11 Nov 2024. FedML.
  3. Federated Cubic Regularized Newton Learning with Sparsification-amplified Differential Privacy
     Wei Huo, Changxin Liu, Kemi Ding, Karl H. Johansson, Ling Shi. 08 Aug 2024. FedML.
  4. Matrix Compression via Randomized Low Rank and Low Precision Factorization
     R. Saha, Varun Srivastava, Mert Pilanci. 17 Oct 2023.
  5. Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates
     Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik. 15 Oct 2023.
  6. Efficient Federated Learning via Local Adaptive Amended Optimizer with Linear Speedup
     Yan Sun, Li Shen, Hao Sun, Liang Ding, Dacheng Tao. 30 Jul 2023. FedML.
  7. Q-SHED: Distributed Optimization at the Edge via Hessian Eigenvectors Quantization
     Nicolò Dal Fabbro, M. Rossi, Luca Schenato, S. Dey. 18 May 2023.
  8. Network-GIANT: Fully distributed Newton-type optimization via harmonic Hessian consensus
     A. Maritan, Ganesh Sharma, Luca Schenato, S. Dey. 13 May 2023.
  9. PersA-FL: Personalized Asynchronous Federated Learning
     Taha Toghani, Soomin Lee, César A. Uribe. 03 Oct 2022. FedML.
  10. FedSSO: A Federated Server-Side Second-Order Optimization Algorithm
      Xinteng Ma, Renyi Bao, Jinpeng Jiang, Yang Liu, Arthur Jiang, Junhua Yan, Xin Liu, Zhisong Pan. 20 Jun 2022. FedML.
  11. Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence
      Sen Na, Michal Derezinski, Michael W. Mahoney. 20 Apr 2022.
  12. SHED: A Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing
      Nicolò Dal Fabbro, S. Dey, M. Rossi, Luca Schenato. 11 Feb 2022. FedML.
  13. Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning
      Xun Qian, Rustem Islamov, M. Safaryan, Peter Richtárik. 02 Nov 2021. FedML.
  14. Linearly Converging Error Compensated SGD
      Eduard A. Gorbunov, D. Kovalev, Dmitry Makarenko, Peter Richtárik. 23 Oct 2020.