An Analytical Characterization of Sloppiness in Neural Networks: Insights from Linear Models

Abstract

Recent experiments have shown that training trajectories of multiple deep neural networks with different architectures, optimization algorithms, hyper-parameter settings, and regularization methods evolve on a remarkably low-dimensional "hyper-ribbon-like" manifold in the space of probability distributions. Inspired by the similarities in the training trajectories of deep networks and linear networks, we analytically characterize this phenomenon for the latter. We show, using tools from dynamical systems theory, that the geometry of this low-dimensional manifold is controlled by (i) the decay rate of the eigenvalues of the input correlation matrix of the training data, (ii) the relative scale of the ground-truth output to the weights at initialization, and (iii) the number of steps of gradient descent. By analytically computing and bounding the contributions of these quantities, we characterize the phase boundaries of the region where hyper-ribbons are to be expected. We also extend our analysis to kernel machines and linear models that are trained with stochastic gradient descent.
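
A minimal sketch, not from the paper, of the three quantities the abstract names: it builds synthetic linear-regression data whose input correlation matrix has power-law eigenvalue decay (quantity (i)), initializes the weights at a small scale relative to the ground-truth output (quantity (ii)), runs a fixed number of gradient-descent steps (quantity (iii)), and then measures how few principal components capture the visited weights. The decay exponent `alpha`, learning rate, and problem sizes are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, T, lr, alpha = 50, 500, 200, 0.1, 2.0

# (i) Inputs whose correlation-matrix eigenvalues decay as k^(-alpha).
eigs = np.arange(1, d + 1, dtype=float) ** (-alpha)
X = rng.standard_normal((n, d)) * np.sqrt(eigs)  # per-feature variances
w_star = rng.standard_normal(d)                  # ground-truth weights
y = X @ w_star

# (ii) Initial weights small relative to the ground-truth output scale.
w = 0.01 * rng.standard_normal(d)

# (iii) T steps of gradient descent on the least-squares loss.
trajectory = []
for _ in range(T):
    grad = X.T @ (X @ w - y) / n
    w -= lr * grad
    trajectory.append(w.copy())

# Effective dimensionality of the trajectory: singular values of the
# (T x d) matrix of visited weights decay quickly when the eigenvalue
# decay is steep, i.e., the trajectory hugs a low-dimensional manifold.
S = np.linalg.svd(np.array(trajectory) - trajectory[0], compute_uv=False)
explained = np.cumsum(S**2) / np.sum(S**2)
print("components for 99% of trajectory variance:",
      int(np.searchsorted(explained, 0.99) + 1))
```

Re-running the sketch with a smaller `alpha` (slower eigenvalue decay) or a larger initial weight scale should raise the number of components needed, consistent with these quantities controlling the manifold's geometry.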

@article{mao2025_2505.08915,
  title={An Analytical Characterization of Sloppiness in Neural Networks: Insights from Linear Models},
  author={Jialin Mao and Itay Griniasty and Yan Sun and Mark K. Transtrum and James P. Sethna and Pratik Chaudhari},
  journal={arXiv preprint arXiv:2505.08915},
  year={2025}
}