Empirical studies have reported that the Hessian matrix of neural networks (NNs) exhibits a near-block-diagonal structure, yet its theoretical foundation remains unclear. In this work, we reveal two forces that shape the Hessian structure: a ``static force'' rooted in the architecture design, and a ``dynamic force'' arising from training. We then provide a rigorous theoretical analysis of the ``static force'' at random initialization. We study linear models and 1-hidden-layer networks with the mean-squared error (MSE) loss and the Cross-Entropy (CE) loss for classification tasks. By leveraging random matrix theory, we compare the limit distributions of the diagonal and off-diagonal Hessian blocks and find that the block-diagonal structure arises as $C \to \infty$, where $C$ denotes the number of classes. Our findings reveal that $C$ is a primary driver of the near-block-diagonal structure. These results may shed new light on the Hessian structure of large language models (LLMs), which typically operate with a large $C$ exceeding $10^4$ or $10^5$.
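To make the claim concrete, below is a minimal numerical sketch (not the paper's exact experiment): for a linear classifier with CE loss at random initialization, it builds the full Hessian with respect to the weight matrix $W \in \mathbb{R}^{C \times d}$, partitions it into $C \times C$ blocks of size $d \times d$ (one block per pair of output classes), and compares the Frobenius norms of diagonal versus off-diagonal blocks as $C$ grows. The sizes `d`, `n`, the Gaussian data model, and the initialization scale are assumptions made purely for illustration.

```python
# Minimal sketch: Hessian block structure of a linear classifier with
# cross-entropy loss at random initialization, for increasing class count C.
import torch

def block_norms(C, d=16, n=256, seed=0):
    g = torch.Generator().manual_seed(seed)
    X = torch.randn(n, d, generator=g)            # random inputs
    y = torch.randint(0, C, (n,), generator=g)    # random labels
    W0 = torch.randn(C, d, generator=g) / d**0.5  # random initialization

    def loss_fn(w_flat):
        logits = X @ w_flat.view(C, d).T          # (n, C) logits
        return torch.nn.functional.cross_entropy(logits, y)

    H = torch.autograd.functional.hessian(loss_fn, W0.flatten())  # (C*d, C*d)
    H = H.view(C, d, C, d).permute(0, 2, 1, 3)    # H[i, j] is the (i, j) d x d block
    norms = torch.linalg.matrix_norm(H)           # (C, C) Frobenius norm of each block
    on_diag = torch.eye(C, dtype=torch.bool)
    return norms[on_diag].mean().item(), norms[~on_diag].mean().item()

for C in (2, 8, 32, 128):
    diag, off = block_norms(C)
    print(f"C={C:4d}  diag block ~ {diag:.4f}  off-diag block ~ {off:.4f}  ratio ~ {off/diag:.3f}")
```

For this model the Hessian blocks have the standard closed form $\frac{1}{n}\sum_k \big(p_{ki}\delta_{ij} - p_{ki}p_{kj}\big)\, x_k x_k^\top$, so when the softmax outputs are near-uniform at random initialization ($p_{ki} \approx 1/C$), diagonal blocks scale like $1/C$ while off-diagonal blocks scale like $1/C^2$; the printed ratio should therefore shrink roughly like $1/C$, consistent with the $C \to \infty$ statement above.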
@article{dong2025_2505.02809,
  title   = {Towards Quantifying the Hessian Structure of Neural Networks},
  author  = {Zhaorui Dong and Yushun Zhang and Zhi-Quan Luo and Jianfeng Yao and Ruoyu Sun},
  journal = {arXiv preprint arXiv:2505.02809},
  year    = {2025}
}