Logarithmic landscape and power-law escape rate of SGD
Stochastic gradient descent (SGD) is subject to complicated multiplicative noise for the mean-square loss. We use this property of the SGD noise to derive a stochastic differential equation (SDE) with simpler additive noise by performing a non-uniform transformation of the time variable. In the SDE, the gradient of the loss is replaced by that of the logarithmized loss. Consequently, we show that, near a local or global minimum, the stationary distribution of the network parameters follows a power law with respect to the loss function, with the exponent specified by the mini-batch size, the learning rate, and the Hessian at the minimum. We obtain an escape-rate formula from a local minimum, which is determined not by the loss barrier height $\Delta L = L(\theta^s) - L(\theta^*)$ between a minimum $\theta^*$ and a saddle $\theta^s$ but by the logarithmized loss barrier height $\Delta \log L = \log[L(\theta^s)/L(\theta^*)]$. Our escape-rate formula explains the empirical fact that SGD prefers flat minima with low effective dimensions.
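To make the power-law claim concrete, here is a minimal numerical sketch (not the paper's code): it simulates a one-dimensional toy SDE $d\theta = -L'(\theta)\,dt + \sqrt{2TL(\theta)}\,dW$ whose noise strength scales with the loss, mimicking the multiplicative SGD noise described above. The quadratic loss, the residual loss $L_0$, and the temperature $T$ (standing in for the learning-rate-to-mini-batch-size ratio) are assumptions of this sketch; for this SDE the zero-flux Fokker-Planck condition gives a stationary density proportional to $L(\theta)^{-(1+1/T)}$, a power law in the loss, which the simulation checks with a log-log histogram fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumptions of this sketch, not taken from the paper):
# quadratic loss with unit Hessian and residual loss L0 at the minimum,
# and temperature T playing the role of learning rate / mini-batch size.
L0, T = 0.1, 0.2
L = lambda th: L0 + 0.5 * th**2   # loss
dL = lambda th: th                # its gradient

# Euler-Maruyama simulation of dtheta = -L' dt + sqrt(2 T L) dW,
# i.e. gradient flow driven by loss-proportional (multiplicative) noise.
dt, n_steps, burn_in = 1e-2, 1_000_000, 10_000
theta, samples = 0.0, []
for t in range(n_steps):
    theta += -dL(theta) * dt + np.sqrt(2.0 * T * L(theta) * dt) * rng.normal()
    if t >= burn_in:          # discard the transient, sample the stationary regime
        samples.append(theta)

# For this Ito SDE the zero-flux Fokker-Planck condition,
# L'p + T (L p)' = 0, gives p(theta) proportional to L(theta)^-(1 + 1/T):
# a power law in the loss. Check it with a log-log histogram fit.
hist, edges = np.histogram(np.asarray(samples), bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 20                          # keep well-populated bins only
slope = np.polyfit(np.log(L(centers[mask])), np.log(hist[mask]), 1)[0]
print(f"fitted exponent: {slope:.2f}   predicted: {-(1 + 1/T):.2f}")
```

With $T = 0.2$ the fitted exponent should land near the predicted $-6$; decreasing $T$ (a smaller learning rate or larger mini-batch) steepens the power law and concentrates the stationary distribution around the minimum, in line with the abstract's claim that the exponent is set by the learning rate, mini-batch size, and Hessian.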