
Improve Generalization Ability of Deep Wide Residual Network with A Suitable Scaling Factor

Abstract

Deep Residual Neural Networks (ResNets) have demonstrated remarkable success across a wide range of real-world applications. In this paper, we identify a suitable scaling factor (denoted by α) on the residual branch of deep wide ResNets to achieve good generalization. We show that if α is a constant, the class of functions induced by the Residual Neural Tangent Kernel (RNTK) is asymptotically not learnable as the depth goes to infinity. We also highlight a surprising phenomenon: even if we allow α to decrease with increasing depth L, the degeneration phenomenon may still occur. However, when α decreases rapidly with L, kernel regression with the deep RNTK and early stopping can achieve the minimax rate, provided that the target regression function lies in the reproducing kernel Hilbert space associated with the infinite-depth RNTK. Our simulation studies on synthetic data and real classification tasks such as MNIST, CIFAR10, and CIFAR100 support our theoretical criteria for choosing α.
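As a rough illustration of the scaling described above (a minimal sketch, not the authors' implementation), the code below builds residual blocks of the form x + α·f(x) and lets α shrink as the depth L grows. The block name ScaledResidualBlock, the width and depth values, and the particular choice α = 1/L are hypothetical placeholders; the paper's theoretical criterion determines how fast α should actually decay with L.

```python
# Sketch of a deep wide ResNet whose residual branches are damped by a
# depth-dependent scaling factor alpha (hypothetical alpha = 1 / depth).
import torch
import torch.nn as nn


class ScaledResidualBlock(nn.Module):
    def __init__(self, width: int, alpha: float):
        super().__init__()
        self.alpha = alpha
        self.branch = nn.Sequential(
            nn.Linear(width, width),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x_{l+1} = x_l + alpha * f(x_l): the residual branch is scaled by alpha.
        return x + self.alpha * self.branch(x)


def make_wide_resnet(width: int, depth: int) -> nn.Sequential:
    # Hypothetical schedule: alpha decays like 1 / L as the depth L increases.
    alpha = 1.0 / depth
    blocks = [ScaledResidualBlock(width, alpha) for _ in range(depth)]
    return nn.Sequential(*blocks)


if __name__ == "__main__":
    net = make_wide_resnet(width=256, depth=64)
    x = torch.randn(8, 256)
    print(net(x).shape)  # torch.Size([8, 256])
```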
