α-Stable convergence of heavy-tailed infinitely-wide neural networks

We consider infinitely-wide multi-layer perceptrons (MLPs) which are limits of standard deep feed-forward neural networks. We assume that, for each layer, the weights of an MLP are initialized with i.i.d. samples from either a light-tailed (finite-variance) or heavy-tailed distribution in the domain of attraction of a symmetric α-stable distribution, where α may depend on the layer. For the bias terms of a layer, we assume i.i.d. initializations with a symmetric α-stable distribution having the same α parameter as that layer's weights. We then extend a recent result of Favaro, Fortini, and Peluchetti (2020) to show that the vector of pre-activation values at all nodes of a given hidden layer converges in the limit, under a suitable scaling, to a vector of i.i.d. random variables with symmetric α-stable distributions.
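A minimal numerical sketch of the setting, under assumptions not stated in the abstract: weights drawn from a symmetric Pareto-type law with tail index α (which lies in the domain of attraction of a symmetric α-stable distribution), a ReLU previous layer, and the n^{1/α} scaling that a stable limit theorem suggests in place of the Gaussian √n scaling. The particular α, width, and activation are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 1.5   # tail index in (0, 2); a hypothetical choice
n = 100_000   # layer width (finite proxy for the infinite-width limit)

# Heavy-tailed i.i.d. weights: symmetric Pareto with tail index alpha,
# which is in the domain of attraction of a symmetric alpha-stable law.
signs = rng.choice([-1.0, 1.0], size=n)
weights = signs * rng.pareto(alpha, size=n)

# Post-activations from the previous layer (here, ReLU of Gaussians).
x = np.maximum(rng.standard_normal(n), 0.0)

# Pre-activation at one node of the next layer, scaled by n^{1/alpha}
# rather than the sqrt(n) used in the finite-variance (Gaussian) case.
z = np.sum(weights * x) / n ** (1.0 / alpha)
print(float(z))
```

Repeating this over many independent draws of the weights would produce a sample whose empirical distribution is heavy-tailed, consistent with a symmetric α-stable limit rather than a Gaussian one.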