ResearchTrend.AI
DONE: Distributed Approximate Newton-type Method for Federated Edge Learning

IEEE Transactions on Parallel and Distributed Systems (TPDS), 2020
10 December 2020
Canh T. Dinh, N. H. Tran, Tuan Dung Nguyen, Wei Bao, A. R. Balef, B. Zhou, Albert Y. Zomaya
Topic: FedML

Papers citing "DONE: Distributed Approximate Newton-type Method for Federated Edge Learning" (8 papers)
Accelerated Training of Federated Learning via Second-Order Methods
Mrinmay Sen, Sidhant R Nair, C Krishna Mohan
Topic: FedML
29 May 2025
Review of Mathematical Optimization in Federated Learning
Shusen Yang, Fangyuan Zhao, Zihao Zhou, Liang Shi, Xuebin Ren, Zongben Xu
Topics: FedML, AI4CE
02 Dec 2024
Scalable and Resource-Efficient Second-Order Federated Learning via Over-the-Air Aggregation
IEEE Wireless Communications Letters (WCL), 2024
Abdulmomen Ghalkha, Chaouki Ben Issaid, Mehdi Bennis
10 Oct 2024
Fed-Sophia: A Communication-Efficient Second-Order Federated Learning Algorithm
Ahmed Elbakary, Chaouki Ben Issaid, Mohammad Shehab, Karim G. Seddik, Tamer A. ElBatt, Mehdi Bennis
10 Jun 2024
FAGH: Accelerating Federated Learning with Approximated Global Hessian
Mrinmay Sen, A. K. Qin, Krishna Mohan
Topic: FedML
16 Mar 2024
Q-SHED: Distributed Optimization at the Edge via Hessian Eigenvectors Quantization
Nicolò Dal Fabbro, M. Rossi, Luca Schenato, S. Dey
18 May 2023
Network-GIANT: Fully Distributed Newton-type Optimization via Harmonic Hessian Consensus
A. Maritan, Ganesh Sharma, Luca Schenato, S. Dey
13 May 2023
SHED: A Newton-type Algorithm for Federated Learning Based on Incremental Hessian Eigenvector Sharing
Nicolò Dal Fabbro, S. Dey, M. Rossi, Luca Schenato
Topic: FedML
11 Feb 2022