Convergence of Gradient Descent for Recurrent Neural Networks: A Nonasymptotic Analysis

Abstract

We analyze recurrent neural networks trained with gradient descent in the supervised learning setting for dynamical systems, and prove that gradient descent can achieve optimality without massive overparameterization. Our in-depth nonasymptotic analysis (i) provides sharp bounds on the network size m and iteration complexity τ in terms of the sequence length T, sample size n, and ambient dimension d, and (ii) identifies the significant impact of long-term dependencies in the dynamical system on the convergence and network-width bounds, characterized by a cutoff point that depends on the Lipschitz continuity of the activation function. Remarkably, this analysis reveals that an appropriately initialized recurrent neural network trained with n samples can achieve optimality with a network size m that scales only logarithmically with n. This sharply contrasts with prior works, which require a high-order polynomial dependence of m on n to establish strong regularity conditions. Our results are based on an explicit characterization of the class of dynamical systems that can be approximated and learned by recurrent neural networks via norm-constrained transportation mappings, and on establishing local smoothness properties of the hidden state with respect to the learnable parameters.
