
On the distance between two neural networks and the stability of learning

Neural Information Processing Systems (NeurIPS), 2020
Abstract

This paper relates parameter distance to gradient breakdown for a broad class of nonlinear compositional functions. The analysis leads to a new distance function called deep relative trust and a descent lemma for neural networks. Since the resulting learning rule seems to require little to no learning rate tuning, it may unlock a simpler workflow for training deeper and more complex neural networks. The Python code used in this paper is available at https://github.com/jxbz/fromage.
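
To give a sense of why the resulting learning rule needs little tuning: the update released at the linked repository (Fromage) scales each parameter tensor's gradient step by the ratio of parameter norm to gradient norm, so every layer moves a fixed *relative* distance per step rather than an absolute one. Below is a minimal PyTorch sketch of an update of this form; the function name and loop structure are illustrative, and the reference implementation in the repository should be treated as authoritative.

```python
import torch

def fromage_style_step(params, lr=0.01):
    """Apply one relative-step update to each parameter tensor.

    Sketch only: each tensor moves a distance proportional to its own
    norm (||w|| / ||g|| scaling), and a (1 + lr^2)^{-1/2} prefactor
    keeps the parameter norm from drifting upward over many steps.
    """
    prefactor = (1 + lr ** 2) ** 0.5
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            p_norm, g_norm = p.norm(), p.grad.norm()
            if p_norm > 0 and g_norm > 0:
                # Step length is lr * ||w||, independent of gradient scale.
                p -= lr * p.grad * (p_norm / g_norm)
            else:
                # Fall back to a plain gradient step for zero-norm tensors.
                p -= lr * p.grad
            p /= prefactor
```

Because the step length is expressed as a fraction of each layer's own weight norm, a single learning rate (e.g. the 0.01 used above, an illustrative default) behaves comparably across layers of very different scales, which is the practical source of the "little to no learning rate tuning" claim in the abstract.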
