
Neural ODEs as the Deep Limit of ResNets with Constant Weights

Analysis and Applications (Anal. Appl.), 2019
Abstract

In this paper we prove that, in the deep limit, stochastic gradient descent on a ResNet-type deep neural network in which every layer shares the same weight matrix converges to stochastic gradient descent for a Neural ODE, and that the corresponding value/loss functions converge as well. Our result gives, in the context of minimization by stochastic gradient descent, a theoretical foundation for considering Neural ODEs as the deep limit of ResNets. Our proof is based on certain decay estimates for associated Fokker-Planck equations.
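The correspondence behind the deep limit can be sketched concretely: a ResNet whose layers all share one weight matrix computes exactly the forward-Euler discretization of a Neural ODE, so as the number of layers grows the network output approaches the ODE solution. Below is a minimal numerical sketch of this forward-pass correspondence (not the paper's code; the network width, depth, activation, and all parameter values are illustrative assumptions):

```python
import numpy as np

# Hypothetical illustration: a ResNet with one shared weight matrix W is the
# forward-Euler scheme for the Neural ODE  dx/dt = tanh(W x + b)  on [0, T],
# with step size h = T / N, where N is the number of layers.

rng = np.random.default_rng(0)
d, N, T = 4, 1000, 1.0                    # state dim, layer count, time horizon
W = rng.normal(scale=0.5, size=(d, d))    # the single shared weight matrix
b = rng.normal(scale=0.1, size=d)
h = T / N                                 # per-layer step size

def f(x):
    """Shared-weight residual map; also the ODE's right-hand side."""
    return np.tanh(W @ x + b)

x0 = rng.normal(size=d)

# Weight-tied ResNet forward pass: x_{k+1} = x_k + h * f(x_k), k = 0..N-1.
x_resnet = x0.copy()
for _ in range(N):
    x_resnet = x_resnet + h * f(x_resnet)

# Reference Neural ODE solution via Euler on a much finer grid.
fine = 100 * N
x_ode = x0.copy()
for _ in range(fine):
    x_ode = x_ode + (T / fine) * f(x_ode)

print("discrepancy:", np.linalg.norm(x_resnet - x_ode))  # shrinks as N grows
```

Rerunning with larger N shows the discrepancy decreasing, which is the forward-pass side of the deep limit; the paper's contribution is the corresponding statement for the stochastic gradient descent dynamics and the loss functions.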
