
Power-law escape rate of SGD

International Conference on Machine Learning (ICML), 2021
Abstract

Stochastic gradient descent (SGD) is subject to complicated multiplicative noise for the mean-square loss. We use this property of SGD noise to derive a stochastic differential equation (SDE) with simpler additive noise by performing a random time change. Using this formalism, we show that the log loss barrier $\Delta\log L = \log[L(\theta^s)/L(\theta^*)]$ between a local minimum $\theta^*$ and a saddle $\theta^s$ determines the escape rate of SGD from the local minimum, contrary to previous results borrowed from physics, in which the linear loss barrier $\Delta L = L(\theta^s) - L(\theta^*)$ decides the escape rate. Our escape-rate formula strongly depends on the typical magnitude $h^*$ and the number $n$ of the outlier eigenvalues of the Hessian. This result explains the empirical fact that SGD prefers flat minima with low effective dimensions, giving insight into the implicit biases of SGD.
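To make the distinction concrete, here is a minimal numerical sketch (not taken from the paper) contrasting the two barrier notions, together with a schematic escape-rate proxy of the form $\exp(-\alpha\,\Delta\log L) = [L(\theta^*)/L(\theta^s)]^{\alpha}$, which is a power law in the loss ratio. The loss values and the exponent alpha below are made-up placeholders; the paper's actual exponent depends on quantities such as the learning rate, batch size, $h^*$, and $n$.

import numpy as np

# Toy illustration (hypothetical values, not the paper's derivation):
# compare the linear loss barrier Delta L with the log loss barrier
# Delta log L between a local minimum theta* and a saddle theta^s.
L_star = 0.02   # L(theta*): loss at the local minimum (made-up value)
L_s = 0.10      # L(theta^s): loss at the saddle       (made-up value)

delta_L = L_s - L_star               # linear barrier (previous results)
delta_log_L = np.log(L_s / L_star)   # log barrier (this paper)

# Schematic power-law escape-rate proxy: exp(-alpha * Delta log L)
# equals (L_star / L_s)**alpha. `alpha` is a placeholder exponent; in the
# paper it would be set by hyperparameters and the n outlier Hessian
# eigenvalues of typical magnitude h*.
alpha = 5.0
rate_proxy = np.exp(-alpha * delta_log_L)

print(f"Delta L = {delta_L:.3f}, Delta log L = {delta_log_L:.3f}")
print(f"power-law escape-rate proxy: {rate_proxy:.3e}")

Note how a rate governed by $\Delta\log L$ becomes a power law in the loss ratio $L(\theta^s)/L(\theta^*)$, whereas a rate governed by $\Delta L$ would be exponential in the loss difference.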
