
Robust Weight Initialization for Tanh Neural Networks with Fixed Point Analysis

Abstract

Increasing a neural network's depth can improve generalization performance, but training deep networks is challenging due to gradient and signal propagation issues. Extensive theoretical research and various methods have been introduced to address these challenges; despite these advances, effective weight initialization methods for tanh neural networks remain insufficiently investigated. This paper presents a novel weight initialization method for neural networks with the tanh activation function. Based on an analysis of the fixed points of the function $\tanh(ax)$, the proposed method determines values of $a$ that mitigate activation saturation. A series of experiments on various classification datasets and physics-informed neural networks demonstrates that the proposed method outperforms Xavier initialization (with or without normalization) in terms of robustness across different network sizes, data efficiency, and convergence speed. Code is available at this https URL
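
The abstract only sketches the fixed-point analysis, but the behavior it refers to is easy to reproduce numerically. The following is a minimal illustrative sketch, not the authors' released code: iterating $x \leftarrow \tanh(ax)$ shows that for $0 < a \le 1$ the map contracts to the unique fixed point at $0$, while for $a > 1$ it converges to one of two symmetric nonzero fixed points, the regime in which pre-activations tend to saturate. The function and parameter names (tanh_fixed_point, x0, iters) are hypothetical, chosen for this sketch only.

# Illustrative sketch (assumed setup, not the paper's method): fixed points
# of f(x) = tanh(a*x). For 0 < a <= 1, x = 0 is the unique fixed point;
# for a > 1, two symmetric nonzero fixed points appear, which relates to
# how the pre-activation scale a affects tanh saturation.
import numpy as np

def tanh_fixed_point(a, x0=1.0, iters=2000):
    """Iterate x <- tanh(a*x) and return the iterate after `iters` steps.

    This approximates a fixed point; convergence is slow near a = 1,
    hence the generous iteration count.
    """
    x = x0
    for _ in range(iters):
        x = np.tanh(a * x)
    return x

for a in [0.5, 0.9, 1.5, 2.0]:
    print(f"a = {a:.1f}: fixed point ~ {tanh_fixed_point(a):+.4f}")
# Expected: ~0 for a = 0.5 and a = 0.9; ~+0.8586 for a = 1.5; ~+0.9575 for a = 2.0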

@article{lee2025_2410.02242,
  title={Robust Weight Initialization for Tanh Neural Networks with Fixed Point Analysis},
  author={Hyunwoo Lee and Hayoung Choi and Hyunju Kim},
  journal={arXiv preprint arXiv:2410.02242},
  year={2025}
}