Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime

We rigorously analyse fully-trained neural networks of arbitrary depth in the Bayes-optimal setting, in the so-called proportional scaling regime where the number of training samples and the widths of the input and of all inner layers diverge proportionally. We prove an information-theoretic equivalence between the Bayesian deep neural network model, trained from data generated by a teacher with matching architecture, and a simpler model of optimal inference in a generalized linear model. This equivalence enables us to compute the optimal generalization error for deep neural networks in this regime. We thus prove the "deep Gaussian equivalence principle" conjectured in Cui et al. (2023) (arXiv:2302.00375). Our result highlights that, in order to escape this "trivialisation" of deep neural networks (in the sense of their reduction to a linear model) occurring in the strongly overparametrized proportional regime, one must consider models trained on much more data.
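
A minimal sketch of the setting, in illustrative notation that is not taken from the paper itself (the symbols $n$, $d_\ell$, $\alpha$, $\gamma_\ell$, $\theta$, $\mathcal{D}_n$ below are assumptions): in the proportional regime the sample size and all layer widths diverge at fixed ratios, and the information-theoretic equivalence can be read, schematically, as an identity between limiting mutual informations per sample.

\[
n,\, d_0,\, d_1,\, \dots,\, d_L \to \infty,
\qquad \frac{n}{d_0} \to \alpha > 0,
\qquad \frac{d_\ell}{d_0} \to \gamma_\ell > 0 \quad (\ell = 1, \dots, L),
\]
\[
\lim_{n \to \infty} \frac{1}{n}\, I\big(\theta;\, \mathcal{D}_n\big)
\;=\;
\lim_{n \to \infty} \frac{1}{n}\, I_{\mathrm{GLM}}\big(\theta_{\mathrm{eff}};\, \mathcal{D}_n\big),
\]

where $\theta$ collects the weights of the teacher network, $\mathcal{D}_n$ is the training set of $n$ input-output pairs, and the right-hand side is the mutual information of an effective generalized linear model; under such an equivalence the Bayes-optimal generalization errors of the two models coincide in the limit. This is a schematic rendering of the statement, not the paper's exact formulation.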
@article{camilli2025_2505.03577,
  title   = {Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime},
  author  = {Francesco Camilli and Daria Tieplova and Eleonora Bergamin and Jean Barbier},
  journal = {arXiv preprint arXiv:2505.03577},
  year    = {2025}
}