
The Hidden Width of Deep ResNets: Tight Error Bounds and Phase Diagrams

38 pages main text, 3 pages bibliography, 6 figures
Abstract

We study the gradient-based training of large-depth residual networks (ResNets) from standard random initializations. We show that with a diverging depth $L$, a fixed embedding dimension $D$, and an arbitrary hidden width $M$, the training dynamics converges to a Neural Mean ODE training dynamics. Remarkably, the limit is independent of the scaling of $M$, covering practical cases of, say, Transformers, where $M$ (the number of hidden units or attention heads per layer) is typically of the order of $D$. For a residual scale $\Theta_D\big(\frac{\alpha}{LM}\big)$, we obtain the error bound $O_D\big(\frac{1}{L}+\frac{\alpha}{\sqrt{LM}}\big)$ between the model's output and its limit after a fixed number of gradient steps, and we verify empirically that this rate is tight. When $\alpha=\Theta(1)$, the limit exhibits complete feature learning, i.e. the Mean ODE is genuinely non-linearly parameterized. In contrast, we show that $\alpha \to \infty$ yields a lazy ODE regime where the Mean ODE is linearly parameterized. We then focus on the particular case of ResNets with two-layer perceptron blocks, for which we study how these scalings depend on the embedding dimension $D$. We show that for this model, the only residual scale that leads to complete feature learning is $\Theta\big(\frac{\sqrt{D}}{LM}\big)$. In this regime, we prove the error bound $O\big(\frac{1}{L}+\frac{\sqrt{D}}{\sqrt{LM}}\big)$ between the ResNet and its limit after a fixed number of gradient steps, which is also empirically tight. Our convergence results rely on a novel mathematical perspective on ResNets: (i) due to the randomness of the initialization, the forward and backward pass through the ResNet behave as the stochastic approximation of certain mean ODEs, and (ii) by propagation of chaos (that is, asymptotic independence of the units) this behavior is preserved through the training dynamics.
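To make the scaling concrete, the following is a minimal sketch (not the authors' code) of a depth-$L$ ResNet with two-layer perceptron blocks and residual scale $\alpha\sqrt{D}/(LM)$, the scaling the abstract identifies as the one yielding complete feature learning. All names (resnet_forward, W_in, W_out, alpha) and the choice of tanh nonlinearity are illustrative assumptions.

```python
import numpy as np

def resnet_forward(x, W_in, W_out, alpha=1.0):
    """Forward pass of a deep ResNet with two-layer perceptron blocks.

    x      : (D,) input embedding
    W_in   : (L, M, D) first-layer weights of each block (standard random init)
    W_out  : (L, D, M) second-layer weights of each block (standard random init)
    alpha  : O(1) constant controlling the residual scale
    """
    L, M, D = W_in.shape
    scale = alpha * np.sqrt(D) / (L * M)      # residual scale Theta(sqrt(D) / (L M))
    h = x.copy()
    for l in range(L):
        hidden = np.tanh(W_in[l] @ h)         # (M,) hidden units of block l
        h = h + scale * (W_out[l] @ hidden)   # residual update
    return h

# Example: large depth L with hidden width M of the order of D,
# as in the Transformer-like setting discussed in the abstract.
rng = np.random.default_rng(0)
D, M, L = 64, 64, 256
x = rng.standard_normal(D)
W_in = rng.standard_normal((L, M, D)) / np.sqrt(D)
W_out = rng.standard_normal((L, D, M))
out = resnet_forward(x, W_in, W_out)
```

As $L \to \infty$ with this scaling, the abstract's result says the forward pass behaves as a stochastic approximation of a mean ODE, with discretization and fluctuation errors of order $1/L$ and $\sqrt{D}/\sqrt{LM}$ respectively.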
