Investigating the locality of neural network training dynamics
Recently, a property of neural training trajectories in weight space was isolated, that of "local elasticity" (He & Su, 2020), which attempts to quantify how the influence of a sampled data point propagates to the prediction at another data point. In this work, we embark on a comprehensive study of local elasticity. First, specific to the classification setting, we suggest a new definition of the original local-elasticity measure. Via experiments on state-of-the-art neural networks trained on SVHN, CIFAR-10, and CIFAR-100, we demonstrate how our new measure detects that weight updates prefer to change predictions on points in the same class as the sampled data point. Next, via examples of neural regression, we demonstrate that the original measure reveals a phase behavior: training proceeds via an initial elastic phase, in which the measure changes rapidly, followed by an eventual inelastic phase, in which it remains large. Lastly, we give multiple examples of learning via gradient flow for which one can obtain a closed-form expression of the original local-elasticity function. By studying the plots of these derived formulas, we give theoretical demonstrations of some of the experimentally detected properties of the measure in the regression setting.
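For intuition, here is a minimal sketch of a relative local-elasticity measure in the spirit of He & Su (2020); the notation below is illustrative and need not match the paper's exact definition. After one SGD step on a sampled point (x, y), one compares the induced change in the prediction at another point x' to the change at x itself:

```latex
% Illustrative relative local elasticity: the ratio of the prediction
% change at x' to the prediction change at x after one SGD step on (x, y).
% This is an assumed form, not necessarily the paper's exact definition.
\[
  S(\mathbf{x}, \mathbf{x}') =
    \frac{\left| f(\mathbf{x}'; \mathbf{w}^{+}) - f(\mathbf{x}'; \mathbf{w}) \right|}
         {\left| f(\mathbf{x};  \mathbf{w}^{+}) - f(\mathbf{x};  \mathbf{w}) \right|},
  \qquad
  \mathbf{w}^{+} = \mathbf{w} - \eta \, \nabla_{\mathbf{w}}\, \ell\big(f(\mathbf{x}; \mathbf{w}), y\big),
\]
% where w+ is the weight vector after the step. A small S means the update
% on x barely moves the prediction at x' (the influence stays local), while
% a large S means the influence propagates to x'.
```

On this reading, the "elastic" phase is the stretch of training in which this ratio changes rapidly, and the "inelastic" phase is the one in which it remains large.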