ResearchTrend.AI

Positively Scale-Invariant Flatness of ReLU Neural Networks
arXiv:1903.02237 · 6 March 2019
Mingyang Yi, Qi Meng, Wei-neng Chen, Zhi-Ming Ma, Tie-Yan Liu
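The "positively scale-invariant" in the title refers to a basic property of ReLU networks: because ReLU(c·x) = c·ReLU(x) for any c > 0, scaling a hidden unit's incoming weights by c and its outgoing weights by 1/c leaves the network function unchanged. A minimal NumPy sketch (illustrative only, not the paper's code) of this invariance for a one-hidden-layer network:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def net(W1, W2, x):
    # One-hidden-layer ReLU network: x -> W2 @ ReLU(W1 @ x)
    return W2 @ relu(W1 @ x)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden-layer weights
W2 = rng.normal(size=(1, 4))   # output-layer weights
x = rng.normal(size=(3,))

# Positive rescaling of hidden unit 0: scale its incoming weights by c > 0
# and its outgoing weights by 1/c. The function value is unchanged.
c = 2.5
W1s = W1.copy(); W1s[0] *= c     # incoming weights of unit 0
W2s = W2.copy(); W2s[:, 0] /= c  # compensate on the outgoing side

assert np.allclose(net(W1, W2, x), net(W1s, W2s, x))
```

Since the loss is constant along these rescaling directions, Hessian-based flatness measures vary along them, which is the motivation for the scale-invariant flatness measure the paper proposes.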

Papers citing "Positively Scale-Invariant Flatness of ReLU Neural Networks"

11 papers shown.

  1. From Global to Local: A Scalable Benchmark for Local Posterior Sampling. Rohan Hitchcock, Jesse Hoogland. 29 Jul 2025.
  2. On the curvature of the loss landscape. Alison Pouplin, Hrittik Roy, Sidak Pal Singh, Georgios Arvanitidis. 10 Jul 2023.
  3. Local Identifiability of Deep ReLU Neural Networks: the Theory. Joachim Bona-Pellissier, François Malgouyres, François Bachoc. Neural Information Processing Systems (NeurIPS), 2022. 15 Jun 2022.
  4. Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction. Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora. Neural Information Processing Systems (NeurIPS), 2022. 14 Jun 2022.
  5. On the Symmetries of Deep Learning Models and their Internal Representations. Charles Godfrey, Davis Brown, Tegan H. Emerson, Henry Kvinge. Neural Information Processing Systems (NeurIPS), 2022. 27 May 2022.
  6. An Embedding of ReLU Networks and an Analysis of their Identifiability. Pierre Stock, Rémi Gribonval. Constructive Approximation (Constr. Approx.), 2021. 20 Jul 2021.
  7. ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks. Jungmin Kwon, Jeongseop Kim, Hyunseong Park, I. Choi. International Conference on Machine Learning (ICML), 2021. 23 Feb 2021.
  8. Understanding Decoupled and Early Weight Decay. Johan Bjorck, Kilian Q. Weinberger, Daniel Schwalbe-Koda. AAAI Conference on Artificial Intelligence (AAAI), 2020. 27 Dec 2020.
  9. The Representation Theory of Neural Networks. M. Armenta, Pierre-Marc Jodoin. 23 Jul 2020.
  10. Optimization for deep learning: theory and algorithms. Tian Ding. 19 Dec 2019.
  11. Hessian based analysis of SGD for Deep Nets: Dynamics and Generalization. Xinyan Li, Qilong Gu, Yingxue Zhou, Tiancong Chen, A. Banerjee. SDM, 2019. 24 Jul 2019.