Efficient Federated Learning via Local Adaptive Amended Optimizer with Linear Speedup

30 July 2023
Yan Sun, Li Shen, Hao Sun, Liang Ding, Dacheng Tao
FedML
arXiv: 2308.00522

Papers citing "Efficient Federated Learning via Local Adaptive Amended Optimizer with Linear Speedup"

12 papers:
  • Convergence Analysis of Asynchronous Federated Learning with Gradient Compression for Non-Convex Optimization
    Diying Yang, Yingwei Hou, Danyang Xiao, Weigang Wu. FedML. 28 Apr 2025.
  • Decentralized Nonconvex Composite Federated Learning with Gradient Tracking and Momentum
    Yuan Zhou, Xinli Shi, Xuelong Li, Jiachen Zhong, G. Wen, Jinde Cao. FedML. 17 Apr 2025.
  • Accelerating Energy-Efficient Federated Learning in Cell-Free Networks with Adaptive Quantization
    Afsaneh Mahmoudi, Ming Xiao, Emil Björnson. 31 Dec 2024.
  • FedSat: A Statistical Aggregation Approach for Class Imbalanced Clients in Federated Learning
    S. Chowdhury, Raju Halder. FedML. 31 Dec 2024.
  • Personalized Quantum Federated Learning for Privacy Image Classification
    Jinjing Shi, Tian Chen, Shichao Zhang, Xuelong Li. FedML. 03 Oct 2024.
  • A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs
    Yan Sun, Li Shen, Dacheng Tao. FedML. 27 Sep 2024.
  • FADAS: Towards Federated Adaptive Asynchronous Optimization
    Yujia Wang, Shiqiang Wang, Songtao Lu, Jinghui Chen. FedML. 25 Jul 2024.
  • Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization
    Ziqing Fan, Shengchao Hu, Jiangchao Yao, Gang Niu, Ya-Qin Zhang, Masashi Sugiyama, Yanfeng Wang. FedML. 29 May 2024.
  • Federated Learning with Manifold Regularization and Normalized Update Reaggregation
    Xuming An, Li Shen, Han Hu, Yong Luo. FedML. 10 Nov 2023.
  • Understanding How Consistency Works in Federated Learning via Stage-wise Relaxed Initialization
    Yan Sun, Li Shen, Dacheng Tao. FedML. 09 Jun 2023.
  • SMU: smooth activation function for deep networks using smoothing maximum technique
    Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, A. Pandey. 08 Nov 2021.
  • Towards Practical Adam: Non-Convexity, Convergence Theory, and Mini-Batch Acceleration
    Congliang Chen, Li Shen, Fangyu Zou, Wei Liu. 14 Jan 2021.