Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning Optimization Landscape
Devansh Bisla, Jing Wang, A. Choromańska
20 January 2022 · arXiv: 2201.08025
Papers citing "Low-Pass Filtering SGD for Recovering Flat Optima in the Deep Learning Optimization Landscape" (28 papers shown):

| Title | Authors | Tags | Metrics | Date |
|---|---|---|---|---|
| Understanding Flatness in Generative Models: Its Role and Benefits | Taehwan Lee, Kyeongkook Seo, Jaejun Yoo, Sung Whan Yoon | DiffM | 53 / 0 / 0 | 14 Mar 2025 |
| Flat-LoRA: Low-Rank Adaption over a Flat Loss Landscape | Tao Li, Zhengbao He, Yujun Li, Yasheng Wang, Lifeng Shang, X. Huang | | 49 / 0 / 0 | 22 Sep 2024 |
| Enhancing Sharpness-Aware Minimization by Learning Perturbation Radius | Xuehao Wang, Weisen Jiang, Shuai Fu, Yu Zhang | AAML | 42 / 0 / 0 | 15 Aug 2024 |
| Enhancing Domain Adaptation through Prompt Gradient Alignment | Hoang Phan, Lam C. Tran, Quyen Tran, Trung Le | | 52 / 0 / 0 | 13 Jun 2024 |
| Revisiting Random Weight Perturbation for Efficiently Improving Generalization | Tao Li, Qinghua Tao, Weihao Yan, Zehao Lei, Yingwen Wu, Kun Fang, M. He, Xiaolin Huang | AAML | 26 / 5 / 0 | 30 Mar 2024 |
| Friendly Sharpness-Aware Minimization | Tao Li, Pan Zhou, Zhengbao He, Xinwen Cheng, Xiaolin Huang | AAML | 46 / 15 / 0 | 19 Mar 2024 |
| GRAWA: Gradient-based Weighted Averaging for Distributed Training of Deep Learning Models | Tolga Dimlioglu, A. Choromańska | | 31 / 3 / 0 | 07 Mar 2024 |
| Stabilizing Sharpness-aware Minimization Through A Simple Renormalization Strategy | Chengli Tan, Jiangshe Zhang, Junmin Liu, Yicheng Wang, Yunda Hao | AAML | 26 / 1 / 0 | 14 Jan 2024 |
| Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning | Yihua Zhang, Yimeng Zhang, Aochuan Chen, Jinghan Jia, Jiancheng Liu, Gaowen Liu, Min-Fong Hong, Shiyu Chang, Sijia Liu | AAML | 21 / 8 / 0 | 13 Oct 2023 |
| Based on What We Can Control Artificial Neural Networks | Cheng Kang, Xujing Yao | | 10 / 0 / 0 | 09 Oct 2023 |
| Entropy-MCMC: Sampling from Flat Basins with Ease | Bolian Li, Ruqi Zhang | | 25 / 5 / 0 | 09 Oct 2023 |
| Decentralized SGD and Average-direction SAM are Asymptotically Equivalent | Tongtian Zhu, Fengxiang He, Kaixuan Chen, Mingli Song, Dacheng Tao | | 34 / 15 / 0 | 05 Jun 2023 |
| An Adaptive Policy to Employ Sharpness-Aware Minimization | Weisen Jiang, Hansi Yang, Yu Zhang, James T. Kwok | AAML | 79 / 31 / 0 | 28 Apr 2023 |
| Going Further: Flatness at the Rescue of Early Stopping for Adversarial Example Transferability | Martin Gubri, Maxime Cordy, Yves Le Traon | AAML | 13 / 3 / 1 | 05 Apr 2023 |
| A Modern Look at the Relationship between Sharpness and Generalization | Maksym Andriushchenko, Francesco Croce, Maximilian Müller, Matthias Hein, Nicolas Flammarion | 3DH | 11 / 52 / 0 | 14 Feb 2023 |
| Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data | Harsh Rangwani, Sumukh K Aithal, Mayank Mishra, R. Venkatesh Babu | | 18 / 27 / 0 | 28 Dec 2022 |
| Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging | Peng Lu, I. Kobyzev, Mehdi Rezagholizadeh, Ahmad Rashid, A. Ghodsi, Philippe Langlais | MoMe | 28 / 11 / 0 | 12 Dec 2022 |
| Efficient Generalization Improvement Guided by Random Weight Perturbation | Tao Li, Wei Yan, Zehao Lei, Yingwen Wu, Kun Fang, Ming Yang, X. Huang | AAML | 35 / 6 / 0 | 21 Nov 2022 |
| SAM as an Optimal Relaxation of Bayes | Thomas Möllenhoff, Mohammad Emtiyaz Khan | BDL | 29 / 32 / 0 | 04 Oct 2022 |
| Trainable Weight Averaging: Accelerating Training and Improving Generalization | Tao Li, Zhehao Huang, Yingwen Wu, Zhengbao He, Qinghua Tao, X. Huang, Chih-Jen Lin | MoMe | 50 / 0 / 0 | 26 May 2022 |
| Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models | Clara Na, Sanket Vaibhav Mehta, Emma Strubell | | 62 / 19 / 0 | 25 May 2022 |
| Anticorrelated Noise Injection for Improved Generalization | Antonio Orvieto, Hans Kersting, F. Proske, Francis R. Bach, Aurélien Lucchi | | 53 / 44 / 0 | 06 Feb 2022 |
| When Do Flat Minima Optimizers Work? | Jean Kaddour, Linqing Liu, Ricardo M. A. Silva, Matt J. Kusner | ODL | 11 / 58 / 0 | 01 Feb 2022 |
| SWAD: Domain Generalization by Seeking Flat Minima | Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, Sungrae Park | MoMe | 216 / 423 / 0 | 17 Feb 2021 |
| Lifted Neural Networks | Armin Askari, Geoffrey Negiar, Rajiv Sambharya, L. Ghaoui | | 26 / 37 / 0 | 03 May 2018 |
| Global optimality conditions for deep neural networks | Chulhee Yun, S. Sra, Ali Jadbabaie | | 118 / 117 / 0 | 08 Jul 2017 |
| On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang | ODL | 273 / 2,886 / 0 | 15 Sep 2016 |
| The Loss Surfaces of Multilayer Networks | A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun | ODL | 175 / 1,184 / 0 | 30 Nov 2014 |