
Logarithmic landscape and power-law escape rate of SGD

International Conference on Machine Learning (ICML), 2021
Abstract

Stochastic gradient descent (SGD) is subject to complicated multiplicative noise for the mean-square loss. We use this property of the SGD noise to derive a stochastic differential equation (SDE) with simpler additive noise by performing a non-uniform transformation of the time variable. In the resulting SDE, the gradient of the loss is replaced by that of the logarithmized loss. Consequently, we show that, near a local or global minimum, the stationary distribution $P_\mathrm{ss}(\theta)$ of the network parameters $\theta$ follows a power law with respect to the loss function $L(\theta)$, i.e. $P_\mathrm{ss}(\theta)\propto L(\theta)^{-\phi}$, with the exponent $\phi$ determined by the mini-batch size, the learning rate, and the Hessian at the minimum. We also obtain an escape-rate formula from a local minimum, which is determined not by the loss-barrier height $\Delta L=L(\theta^s)-L(\theta^*)$ between a minimum $\theta^*$ and a saddle $\theta^s$, but by the logarithmized loss-barrier height $\Delta\log L=\log[L(\theta^s)/L(\theta^*)]$. Our escape-rate formula explains the empirical fact that SGD prefers flat minima with low effective dimensions.
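The key mechanism in the abstract — multiplicative SGD noise becoming additive after taking the logarithm of the loss — can be illustrated with a toy experiment. The sketch below is a hypothetical 1D setup (not the paper's derivation): a quadratic loss $L(\theta)=\theta^2/2$ with gradient noise whose amplitude scales like $\sqrt{L(\theta)}\sim|\theta|$, as for the mean-square loss. The constants `eta` and `s` are illustrative placeholders for the learning-rate and mini-batch-size factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D model: loss L(theta) = theta^2 / 2, with multiplicative
# gradient noise of amplitude proportional to |theta| ~ sqrt(L(theta)).
eta = 0.01  # learning rate (illustrative value)
s = 0.5     # noise strength, standing in for mini-batch-size factors

def sgd_step(theta):
    # One SGD update with state-dependent (multiplicative) noise.
    noise = s * abs(theta) * rng.standard_normal()
    return theta - eta * (theta + noise)

def increment_stats(theta0, n=20000):
    # Sample single-step increments of L and of log L at a fixed state scale.
    dL, dlogL = [], []
    for _ in range(n):
        new = sgd_step(theta0)
        dL.append(new**2 / 2 - theta0**2 / 2)
        dlogL.append(np.log(new**2 / 2) - np.log(theta0**2 / 2))
    return np.std(dL), np.std(dlogL)

stdL_small, stdlog_small = increment_stats(0.1)
stdL_large, stdlog_large = increment_stats(10.0)

# Fluctuations of L depend strongly on the state (multiplicative noise),
# scaling like theta^2, while fluctuations of log L are state-independent
# (additive noise): the first ratio should be ~(10/0.1)^2, the second ~1.
print(stdL_large / stdL_small)
print(stdlog_large / stdlog_small)
```

Under this model the update factor `new / theta` does not depend on `theta`, so increments of $\log L$ have a state-independent distribution, which is exactly the sense in which the logarithmized loss sees additive noise.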
