Optimization Theory for ReLU Neural Networks Trained with Normalization Layers
arXiv:2006.06878, 11 June 2020
Yonatan Dukler, Quanquan Gu, Guido Montúfar
Papers citing "Optimization Theory for ReLU Neural Networks Trained with Normalization Layers" (9 papers shown):

1. Feature Normalization Prevents Collapse of Non-contrastive Learning Dynamics
   Han Bao [SSL, MLT], 28 Sep 2023

2. The Implicit Bias of Batch Normalization in Linear Models and Two-layer Linear Convolutional Neural Networks
   Yuan Cao, Difan Zou, Yuan-Fang Li, Quanquan Gu [MLT], 20 Jun 2023

3. Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction
   Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora [FAtt], 14 Jun 2022

4. Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks
   Benjamin Bowman, Guido Montúfar, 12 Jan 2022

5. Geometry of Linear Convolutional Networks
   Kathlén Kohn, Thomas Merkh, Guido Montúfar, Matthew Trager, 03 Aug 2021

6. FedBN: Federated Learning on Non-IID Features via Local Batch Normalization
   Xiaoxiao Li, Meirui Jiang, Xiaofei Zhang, Michael Kamp, Qi Dou [OOD, FedML], 15 Feb 2021

7. A Dynamical View on Optimization Algorithms of Overparameterized Neural Networks
   Zhiqi Bu, Shiyun Xu, Kan Chen, 25 Oct 2020

8. Inductive Bias of Gradient Descent for Weight Normalized Smooth Homogeneous Neural Nets
   Depen Morwani, H. G. Ramaswamy, 24 Oct 2020

9. Normalization Techniques in Training DNNs: Methodology, Analysis and Application
   Lei Huang, Jie Qin, Yi Zhou, Fan Zhu, Li Liu, Ling Shao [AI4CE], 27 Sep 2020