Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization
arXiv:2405.18890, 29 May 2024
Ziqing Fan, Shengchao Hu, Jiangchao Yao, Gang Niu, Ya-Qin Zhang, Masashi Sugiyama, Yanfeng Wang
Topics: FedML

Papers citing "Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization"

5 papers:
A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs
27 Sep 2024
Yan Sun, Li Shen, Dacheng Tao
Topics: FedML

FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy
21 Feb 2023
Yan Sun, Li Shen, Tiansheng Huang, Liang Ding, Dacheng Tao
Topics: FedML

Efficient Sharpness-aware Minimization for Improved Training of Neural Networks
07 Oct 2021
Jiawei Du, Hanshu Yan, Jiashi Feng, Joey Tianyi Zhou, Liangli Zhen, Rick Siow Mong Goh, Vincent Y. F. Tan
Topics: AAML

Federated Learning on Non-IID Data Silos: An Experimental Study
03 Feb 2021
Q. Li, Yiqun Diao, Quan Chen, Bingsheng He
Topics: FedML, OOD

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
15 Sep 2016
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
Topics: ODL