
The Disharmony between BN and ReLU Causes Gradient Explosion, but is Offset by the Correlation between Activations

Abstract

Deep neural networks that employ batch normalization and ReLU-like activation functions suffer from instability in the early stages of training due to the high gradients induced by temporal gradient explosion. In this study, we analyze the occurrence and mitigation of this gradient explosion both theoretically and empirically, and discover that the correlation between activations plays a key role in preventing the explosion from persisting throughout training. Finally, based on our observations, we propose an improved adaptive learning rate algorithm to effectively control the training instability.
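
The phenomenon described above can be illustrated with a short experiment. The following is a minimal sketch, not the authors' code: it assumes PyTorch and a plain deep MLP built from Linear + BatchNorm1d + ReLU blocks with arbitrary depth and width, and simply measures per-layer gradient norms at initialization, where gradients tend to grow toward the input side of the BN+ReLU stack.

```python
# Hypothetical illustration (not from the paper): observe gradient growth
# through a randomly initialized stack of BN + ReLU blocks.
import torch
import torch.nn as nn

torch.manual_seed(0)
depth, width, batch = 30, 256, 512  # arbitrary choices for illustration

layers = []
for _ in range(depth):
    layers += [nn.Linear(width, width), nn.BatchNorm1d(width), nn.ReLU()]
net = nn.Sequential(*layers)

x = torch.randn(batch, width)
loss = net(x).pow(2).mean()  # arbitrary scalar loss at initialization
loss.backward()

# Gradient norm of each Linear layer's weight, from the input side to the
# output side. Earlier layers tend to receive larger gradients, consistent
# with a gradient explosion propagating backward through the BN+ReLU stack
# before training has had a chance to correlate the activations.
for i, m in enumerate(net):
    if isinstance(m, nn.Linear):
        print(f"block {i // 3:2d}  grad norm = {m.weight.grad.norm():.3e}")
```

Comparing the printed norms across blocks gives a rough empirical view of the early-training behavior the abstract refers to; it does not reproduce the paper's analysis or its adaptive learning rate algorithm.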
