
Dispelling the Curse of Singularities in Neural Network Optimizations

Hengjie Cao
Mengyi Chen
Yifeng Yang
Fang Dong
Ruijun Huang
Anrui Chen
Jixian Zhou
Mingzhi Dong
Yujiang Wang
Dongsheng Li
Wenyi Fang
Yuanyi Lin
Fan Wu
Li Shang
Main: 10 pages
10 figures
Bibliography: 3 pages
9 tables
Appendix: 22 pages
Abstract

This work investigates the optimization instability of deep neural networks from a less-explored yet insightful perspective: the emergence and amplification of singularities in the parametric space. Our analysis reveals that parametric singularities inevitably grow with gradient updates and further intensify alignment with representations, leading to increased singularities in the representation space. We show that the gradient Frobenius norms are bounded by the top singular values of the weight matrices, and as training progresses, the mutually reinforcing growth of weight and representation singularities, termed the curse of singularities, relaxes these bounds, escalating the risk of sharp loss explosions. To counter this, we propose Parametric Singularity Smoothing (PSS), a lightweight, flexible, and effective method for smoothing the singular spectra of weight matrices. Extensive experiments across diverse datasets, architectures, and optimizers demonstrate that PSS mitigates instability, restores trainability even after failure, and improves both training efficiency and generalization.
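The abstract's claim that gradient Frobenius norms are bounded by top singular values can be illustrated with a standard operator-norm argument for a single linear layer; this is our illustrative derivation, not the paper's full statement, which presumably covers deeper compositions:

$$
y = Wx \;\Rightarrow\; \frac{\partial L}{\partial x} = W^{\top}\frac{\partial L}{\partial y}
\;\Rightarrow\;
\left\|\frac{\partial L}{\partial x}\right\|_{F} \le \sigma_{\max}(W)\left\|\frac{\partial L}{\partial y}\right\|_{F},
$$

so the top singular value $\sigma_{\max}(W)$ directly scales the back-propagated gradient norm, and a growing singular spectrum relaxes the bound layer by layer.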

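The abstract does not specify the exact PSS update, but a minimal sketch of one plausible form of singular-spectrum smoothing, in which each singular value is interpolated toward the spectral mean, might look like the following. The function name `smooth_singular_spectrum` and the coefficient `alpha` are our assumptions for illustration, not the paper's definitions:

```python
import torch

def smooth_singular_spectrum(W: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Hypothetical sketch of parametric singularity smoothing.

    Shrinks each singular value of a weight matrix toward the
    spectrum's mean, leaving singular vectors untouched. This
    illustrates spectrum smoothing in general, not the paper's
    exact PSS update.
    """
    # Reduced SVD of the weight matrix: W = U diag(s) V^T.
    U, s, Vh = torch.linalg.svd(W, full_matrices=False)
    # Interpolate each singular value toward the spectral mean,
    # flattening spikes at the top of the spectrum.
    s_smoothed = (1.0 - alpha) * s + alpha * s.mean()
    # Reassemble the weight matrix with the smoothed spectrum.
    return U @ torch.diag(s_smoothed) @ Vh

# Example: smooth a layer's weight in-place between optimizer steps.
with torch.no_grad():
    W = torch.randn(256, 512)
    W.copy_(smooth_singular_spectrum(W, alpha=0.1))
```

A per-step SVD like this costs O(min(m, n) · mn) per matrix, which matches the abstract's description of PSS as lightweight only if applied periodically or to selected layers; how often and where to apply it is left open here.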