Batch Normalization Preconditioning for Neural Network Training

Susanna Lange, Kyle E. Helfrich, Qiang Ye
2 August 2021

Papers citing "Batch Normalization Preconditioning for Neural Network Training" (4 papers shown)
Preconditioning for Accelerated Gradient Descent Optimization and Regularization
Qiang Ye
30 Sep 2024

fKAN: Fractional Kolmogorov-Arnold Networks with trainable Jacobi basis functions
Alireza Afzal Aghaei
11 Jun 2024

Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction
Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora
14 Jun 2022

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016