Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach

11 October 2022 · arXiv:2210.05177
Peng Mi, Li Shen, Tianhe Ren, Yiyi Zhou, Xiaoshuai Sun, Rongrong Ji, Dacheng Tao
AAML
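The paper sparsifies the inner perturbation step of sharpness-aware minimization (SAM), so that only a subset of weight coordinates is perturbed before the sharpness-aware gradient is computed. The sketch below is a minimal illustration of that idea, not the authors' reference implementation: it assumes a random binary mask (the paper also studies a Fisher-information-based mask), and the hyperparameters `rho` and `sparsity` are illustrative placeholders.

```python
import torch

def ssam_like_step(model, loss_fn, data, target, base_opt, rho=0.05, sparsity=0.5):
    """One SAM-style update whose ascent perturbation is restricted to a
    sparse random subset of weights (an illustrative stand-in for SSAM)."""
    # First pass: gradient at the current weights.
    base_opt.zero_grad()
    loss = loss_fn(model(data), target)
    loss.backward()

    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)

    # Ascend along the normalized gradient, but only on coordinates kept by a
    # binary mask; `sparsity` is the fraction of coordinates zeroed out.
    perturbs = []
    with torch.no_grad():
        for p, g in zip(params, grads):
            mask = (torch.rand_like(p) >= sparsity).float()
            e = rho * g / (grad_norm + 1e-12) * mask
            p.add_(e)
            perturbs.append(e)

    # Second pass: gradient at the sparsely perturbed weights.
    base_opt.zero_grad()
    loss_fn(model(data), target).backward()

    # Undo the perturbation, then step the base optimizer with the
    # sharpness-aware gradient now stored in each p.grad.
    with torch.no_grad():
        for p, e in zip(params, perturbs):
            p.sub_(e)
    base_opt.step()
    return loss.item()
```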

Papers citing "Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach"

Showing 10 of 60 citing papers.
AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks
Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shi-Yong Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao
01 Mar 2023

FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy
Yan Sun, Li Shen, Tiansheng Huang, Liang Ding, Dacheng Tao
FedML · 21 Feb 2023

Improving the Model Consistency of Decentralized Federated Learning
Yi Shi, Li Shen, Kang Wei, Yan Sun, Bo Yuan, Xueqian Wang, Dacheng Tao
FedML · 08 Feb 2023

Efficient Generalization Improvement Guided by Random Weight Perturbation
Tao Li, Wei Yan, Zehao Lei, Yingwen Wu, Kun Fang, Ming Yang, X. Huang
AAML · 21 Nov 2022

Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models
Qihuang Zhong, Liang Ding, Li Shen, Peng Mi, Juhua Liu, Bo Du, Dacheng Tao
AAML · 11 Oct 2022

Randomized Sharpness-Aware Training for Boosting Computational Efficiency in Deep Learning
Yang Zhao, Hao Zhang, Xiuyuan Hu
18 Mar 2022

Efficient Sharpness-aware Minimization for Improved Training of Neural Networks
Jiawei Du, Hanshu Yan, Jiashi Feng, Joey Tianyi Zhou, Liangli Zhen, Rick Siow Mong Goh, Vincent Y. F. Tan
AAML · 07 Oct 2021

Towards Practical Adam: Non-Convexity, Convergence Theory, and Mini-Batch Acceleration
Congliang Chen, Li Shen, Fangyu Zou, Wei Liu
14 Jan 2021

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL · 15 Sep 2016

Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
VLM · 03 Jul 2012