
Understanding Why Neural Networks Generalize Well Through GSNR of Parameters
21 January 2020
Jinlong Liu, Guo-qing Jiang, Yunzhi Bai, Ting Chen, Huayan Wang
AI4CE

Papers citing "Understanding Why Neural Networks Generalize Well Through GSNR of Parameters"

6 / 6 papers shown
Wrong-of-Thought: An Integrated Reasoning Framework with Multi-Perspective Verification and Wrong Information
Yongheng Zhang, Qiguang Chen, Jingxuan Zhou, Peng Wang, Jiasheng Si, Jin Wang, Wenpeng Lu, Libo Qin
LRM
06 Oct 2024
Gradient Mask: Lateral Inhibition Mechanism Improves Performance in Artificial Neural Networks
Lei Jiang, Yongqing Liu, Shihai Xiao, Yansong Chua
14 Aug 2022
In Search of Probeable Generalization Measures
Jonathan Jaegerman, Khalil Damouni, M. M. Ankaralı, Konstantinos N. Plataniotis
23 Oct 2021
Analysis of Generalizability of Deep Neural Networks Based on the Complexity of Decision Boundary
Shuyue Guan, Murray H. Loew
16 Sep 2020
The Break-Even Point on Optimization Trajectories of Deep Neural Networks
Stanislaw Jastrzebski, Maciej Szymczak, Stanislav Fort, Devansh Arpit, Jacek Tabor, Kyunghyun Cho, Krzysztof J. Geras
21 Feb 2020
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL
15 Sep 2016