Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization
arXiv:2307.11007 · 20 July 2023
Kaiyue Wen, Zhiyuan Li, Tengyu Ma
Topics: FAtt
Papers citing "Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization"

26 papers shown.
SSE-SAM: Balancing Head and Tail Classes Gradually through Stage-Wise SAM
Xingyu Lyu, Qianqian Xu, Zhiyong Yang, Shaojie Lyu, Qingming Huang
18 Dec 2024

Towards Understanding the Role of Sharpness-Aware Minimization Algorithms for Out-of-Distribution Generalization
Samuel Schapiro, Han Zhao
06 Dec 2024
Reweighting Local Minima with Tilted SAM
Tian Li, Tianyi Zhou, J. Bilmes
30 Oct 2024
Implicit Regularization of Sharpness-Aware Minimization for Scale-Invariant Problems
Bingcong Li, Liang Zhang, Niao He
18 Oct 2024

Sharpness-Aware Minimization Efficiently Selects Flatter Minima Late in Training
Zhanpeng Zhou, Mingze Wang, Yuchen Mao, Bingrui Li, Junchi Yan
Topics: AAML
14 Oct 2024

Understanding Adversarially Robust Generalization via Weight-Curvature Index
Yuelin Xu, Xiao Zhang
Topics: AAML
10 Oct 2024

On the Trade-off between Flatness and Optimization in Distributed Learning
Ying Cao, Zhaoxian Wu, Kun Yuan, Ali H. Sayed
28 Jun 2024

Forget Sharpness: Perturbed Forgetting of Model Biases Within SAM Dynamics
Ankit Vani, Frederick Tung, Gabriel L. Oliveira, Hossein Sharifi-Noghabi
Topics: AAML
10 Jun 2024

A Universal Class of Sharpness-Aware Minimization Algorithms
B. Tahmasebi, Ashkan Soleymani, Dara Bahri, Stefanie Jegelka, P. Jaillet
Topics: AAML
06 Jun 2024

Improving Generalization and Convergence by Enhancing Implicit Regularization
Mingze Wang, Haotian He, Jinbo Wang, Zilin Wang, Guanhua Huang, Feiyu Xiong, Zhiyu Li, E. Weinan, Lei Wu
31 May 2024

Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning
Jacob Mitchell Springer, Vaishnavh Nagarajan, Aditi Raghunathan
30 May 2024

Improving Generalization of Deep Neural Networks by Optimum Shifting
Yuyan Zhou, Ye Li, Lei Feng, Sheng-Jun Huang
Topics: OOD, ODL
23 May 2024

Exploration is Harder than Prediction: Cryptographically Separating Reinforcement Learning from Supervised Learning
Noah Golowich, Ankur Moitra, Dhruv Rohatgi
Topics: OffRL
04 Apr 2024

A PAC-Bayesian Link Between Generalisation and Flat Minima
Maxime Haddouche, Paul Viallard, Umut Simsekli, Benjamin Guedj
13 Feb 2024

Strong convexity-guided hyper-parameter optimization for flatter losses
Rahul Yedida, Snehanshu Saha
07 Feb 2024

Stabilizing Sharpness-aware Minimization Through A Simple Renormalization Strategy
Chengli Tan, Jiangshe Zhang, Junmin Liu, Yicheng Wang, Yunda Hao
Topics: AAML
14 Jan 2024

Achieving Margin Maximization Exponentially Fast via Progressive Norm Rescaling
Mingze Wang, Zeping Min, Lei Wu
24 Nov 2023

TRAM: Bridging Trust Regions and Sharpness Aware Minimization
Tom Sherborne, Naomi Saphra, Pradeep Dasigi, Hao Peng
05 Oct 2023

A simple connection from loss flatness to compressed neural representations
Shirui Chen, Stefano Recanatesi, E. Shea-Brown
03 Oct 2023

How to escape sharp minima with random perturbations
Kwangjun Ahn, Ali Jadbabaie, S. Sra
Topics: ODL
25 May 2023

Going Further: Flatness at the Rescue of Early Stopping for Adversarial Example Transferability
Martin Gubri, Maxime Cordy, Yves Le Traon
Topics: AAML
05 Apr 2023

The Dynamics of Sharpness-Aware Minimization: Bouncing Across Ravines and Drifting Towards Wide Minima
Peter L. Bartlett, Philip M. Long, Olivier Bousquet
04 Oct 2022

Trajectory-dependent Generalization Bounds for Deep Neural Networks via Fractional Brownian Motion
Chengli Tan, Jiang Zhang, Junmin Liu
09 Jun 2022

Understanding Gradient Descent on Edge of Stability in Deep Learning
Sanjeev Arora, Zhiyuan Li, A. Panigrahi
Topics: MLT
19 May 2022

What Happens after SGD Reaches Zero Loss? --A Mathematical Framework
Zhiyuan Li, Tianhao Wang, Sanjeev Arora
Topics: MLT
13 Oct 2021

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
Topics: ODL
15 Sep 2016