SmoothOut: Smoothing Out Sharp Minima to Improve Generalization in Deep Learning
arXiv:1805.07898 · 21 May 2018
W. Wen, Yandan Wang, Feng Yan, Cong Xu, Chunpeng Wu, Yiran Chen, H. Li

Papers citing "SmoothOut: Smoothing Out Sharp Minima to Improve Generalization in Deep Learning"

6 / 6 papers shown

Layer-wise Adaptive Gradient Norm Penalizing Method for Efficient and Accurate Deep Learning
Sunwoo Lee · 18 Mar 2025

Scalable Back-Propagation-Free Training of Optical Physics-Informed Neural Networks
Yequan Zhao, Xinling Yu, Xian Xiao, Z. Chen, Z. Liu, G. Kurczveil, R. Beausoleil, S. Liu, Z. Zhang · 17 Feb 2025

Towards Understanding Sharpness-Aware Minimization
Maksym Andriushchenko, Nicolas Flammarion · AAML · 13 Jun 2022

Tackling benign nonconvexity with smoothing and stochastic gradients
Harsh Vardhan, Sebastian U. Stich · 18 Feb 2022

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani · UQCV, BDL · 06 Jun 2015