The dynamics of Deep Linear Networks (DLNs) is dramatically affected by the variance $\sigma^2$ of the parameters at initialization. For DLNs of width $w$, we show a phase transition w.r.t. the scaling $\gamma$ of the variance $\sigma^2 = w^{-\gamma}$ as $w \to \infty$: for large variance ($\gamma < 1$), $\theta_0$ is very close to a global minimum but far from any saddle point, and for small variance ($\gamma > 1$), $\theta_0$ is close to a saddle point and far from any global minimum. While the first case corresponds to the well-studied NTK regime, the second case is less understood. This motivates the study of the limit $\gamma \to \infty$, where we conjecture a Saddle-to-Saddle dynamics: throughout training, gradient descent visits the neighborhoods of a sequence of saddles, each corresponding to linear maps of increasing rank, until reaching a sparse global minimum. We support this conjecture with a theorem for the dynamics between the first two saddles, as well as some numerical experiments.
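As an illustration of the conjectured small-initialization regime, the following is a minimal sketch (not the paper's code): plain gradient descent on a depth-3 DLN with i.i.d. Gaussian initialization of variance $\sigma^2 = w^{-\gamma}$ for a large $\gamma$, tracking the singular values of the end-to-end linear map. The target map, width, depth, learning rate, and scaling exponent are illustrative assumptions; under them one expects loss plateaus (near saddles) separated by jumps in which one more singular value is learned.

```python
# Minimal sketch of Saddle-to-Saddle behavior in a deep linear network.
# All hyperparameters below are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
d, w = 4, 50                       # input/output dimension d, hidden width w
gamma = 3.0                        # variance scaling sigma^2 = w^{-gamma} (small-variance regime)
sigma = w ** (-gamma / 2)

# Low-rank target linear map A* with well-separated singular values.
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
A_star = U @ np.diag([3.0, 1.0, 0.3, 0.0]) @ V.T

# Depth-3 DLN parameters W3 @ W2 @ W1, i.i.d. Gaussian entries with std sigma.
W1 = sigma * rng.standard_normal((w, d))
W2 = sigma * rng.standard_normal((w, w))
W3 = sigma * rng.standard_normal((d, w))

lr, steps = 0.05, 10_000
for t in range(steps):
    A = W3 @ W2 @ W1               # end-to-end linear map
    E = A - A_star                 # residual of the loss 0.5 * ||A - A*||_F^2
    # Gradients of the loss w.r.t. each factor.
    g3 = E @ (W2 @ W1).T
    g2 = W3.T @ E @ W1.T
    g1 = (W3 @ W2).T @ E
    W1 -= lr * g1
    W2 -= lr * g2
    W3 -= lr * g3
    if t % 1_000 == 0:
        sv = np.linalg.svd(W3 @ W2 @ W1, compute_uv=False)
        loss = 0.5 * np.sum(E ** 2)
        # Expected pattern: long plateaus, with the rank of the learned map
        # increasing by one at each escape from a saddle's neighborhood.
        print(f"step {t:6d}  loss {loss:.4f}  singular values {np.round(sv, 3)}")
```

With smaller $\sigma$ (larger $\gamma$) the plateaus lengthen and the stagewise, rank-by-rank progression becomes more pronounced, which is the qualitative picture the conjecture describes.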