The Implicit Bias of Batch Normalization in Linear Models and Two-layer Linear Convolutional Neural Networks (arXiv:2306.11680)

20 June 2023 · Yuan Cao, Difan Zou, Yuan-Fang Li, Quanquan Gu · MLT

Papers citing "The Implicit Bias of Batch Normalization in Linear Models and Two-layer Linear Convolutional Neural Networks" (6 papers shown):
Gradient Descent Robustly Learns the Intrinsic Dimension of Data in Training Convolutional Neural Networks
Chenyang Zhang, Peifeng Gao, Difan Zou, Yuan Cao · OOD, MLT · 11 Apr 2025
Adversarial Vulnerability as a Consequence of On-Manifold Inseparability
Rajdeep Haldar, Yue Xing, Qifan Song, Guang Lin · 09 Oct 2024
The Implicit Bias of Adam on Separable Data
Chenyang Zhang, Difan Zou, Yuan Cao · AI4CE · 15 Jun 2024
On the Benefits of Over-parameterization for Out-of-Distribution Generalization
Yifan Hao, Yong Lin, Difan Zou, Tong Zhang · OODD, OOD · 26 Mar 2024
On the Training Instability of Shuffling SGD with Batch Normalization
David Wu, Chulhee Yun, S. Sra · 24 Feb 2023
On Margin Maximization in Linear and ReLU Networks
Gal Vardi, Ohad Shamir, Nathan Srebro · 06 Oct 2021