
Models of Heavy-Tailed Mechanistic Universality

Main: 9 pages, 4 figures, 3 tables. Appendix: 25 pages. Bibliography: 6 pages.
Abstract

Recent theoretical and empirical successes in deep learning, including the celebrated neural scaling laws, are punctuated by the observation that many objects of interest tend to exhibit some form of heavy-tailed or power-law behavior. In particular, the prevalence of heavy-tailed spectral densities in Jacobians, Hessians, and weight matrices has led to the introduction of the concept of heavy-tailed mechanistic universality (HT-MU). Multiple lines of empirical evidence suggest a robust correlation between heavy-tailed metrics and model performance, indicating that HT-MU may be a fundamental aspect of deep learning efficacy. Here, we propose a general family of random matrix models -- the high-temperature Marchenko-Pastur (HTMP) ensemble -- to explore attributes that give rise to heavy-tailed behavior in trained neural networks. Under this model, spectral densities with power-law upper and lower tails arise through a combination of three independent factors (complex correlation structures in the data, reduced temperatures during training, and reduced eigenvector entropy), appearing as an implicit bias in the model structure, and they can be controlled with an "eigenvalue repulsion" parameter. We discuss the implications of our model for other appearances of heavy tails, including neural scaling laws, optimizer trajectories, and the five-plus-one phases of neural network training.
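Since the HTMP ensemble is only described abstractly here, the following is a minimal sketch of how such high-temperature spectra can be simulated. It assumes (our assumption, not the paper's stated construction) that the Dumitriu-Edelman bidiagonal model for the Laguerre beta-ensemble, run at small beta, serves as a stand-in for a high-temperature Marchenko-Pastur ensemble; beta plays the role of the "eigenvalue repulsion" parameter, and the normalization is a modeling choice.

# Sketch: spectra of a Laguerre beta-ensemble at high temperature (small beta).
# Assumption: the Dumitriu-Edelman bidiagonal model is used as a stand-in for
# the paper's HTMP ensemble; beta acts as the eigenvalue-repulsion parameter.
import numpy as np

def laguerre_beta_eigenvalues(n, m, beta, rng):
    """Eigenvalues of the n x n Laguerre beta-ensemble via the bidiagonal model.

    B is lower-bidiagonal with chi-distributed entries: the diagonal has
    beta*(m - i) degrees of freedom and the subdiagonal beta*(n - 1 - i),
    for i = 0, 1, ... (requires m >= n so all degrees of freedom are positive).
    Eigenvalues are normalized by beta*m so the mean eigenvalue is O(1)
    across beta; the precise scaling is a modeling choice.
    """
    diag = np.sqrt(rng.chisquare(beta * (m - np.arange(n))))
    sub = np.sqrt(rng.chisquare(beta * (n - 1 - np.arange(n - 1))))
    B = np.diag(diag) + np.diag(sub, k=-1)
    return np.linalg.eigvalsh(B @ B.T) / (beta * m)

rng = np.random.default_rng(0)
n, m = 500, 1000

# beta = 2 is the classical (complex Wishart) regime: Marchenko-Pastur bulk.
# beta ~ c/n is the high-temperature regime: a broader, heavier-tailed spectrum.
for beta in (2.0, 4.0 / n):
    evals = np.concatenate([laguerre_beta_eigenvalues(n, m, beta, rng)
                            for _ in range(20)])
    q = np.quantile(evals, [0.01, 0.5, 0.99])
    print(f"beta={beta:.4f}: 1%={q[0]:.3g}, median={q[1]:.3g}, 99%={q[2]:.3g}")

Running the sketch, the quantile spread widens sharply as beta shrinks, consistent with the repulsion parameter controlling how heavy the upper and lower tails become.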

@article{hodgkinson2025_2506.03470,
  title={Models of Heavy-Tailed Mechanistic Universality},
  author={Liam Hodgkinson and Zhichao Wang and Michael W. Mahoney},
  journal={arXiv preprint arXiv:2506.03470},
  year={2025}
}