In this note we give a simple proof of convergence for stochastic gradient descent (SGD) on strongly convex functions under a smoothness assumption milder than the standard one. We show that, for carefully chosen stepsizes, SGD converges after $T$ iterations at a rate consisting of an exponentially decaying term plus a noise term that measures the variance of the stochastic gradients. For deterministic gradient descent (GD), and for SGD in the interpolation setting, the noise term vanishes and we recover the exponential convergence rate. The bound matches the best known iteration complexity of GD and SGD, up to constants.
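The behavior described above can be illustrated numerically. The sketch below is not the analysis from the note; it runs SGD on a simple strongly convex quadratic with additive Gaussian gradient noise (an illustrative noise model), comparing the noiseless case, where a constant stepsize gives exponential convergence, with the noisy case, where a decreasing stepsize damps the variance term.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(x0, grad, stepsize, noise_std, n_iters):
    """SGD with additive Gaussian gradient noise (illustrative model).

    stepsize is a function of the iteration counter t, so both constant
    and decreasing schedules can be passed in.
    """
    x = np.array(x0, dtype=float)
    for t in range(n_iters):
        g = grad(x) + noise_std * rng.standard_normal(x.shape)
        x = x - stepsize(t) * g
    return x

# Strongly convex test function f(x) = 0.5 * ||x||^2, minimizer at 0.
grad = lambda x: x

# Noiseless case (GD): constant stepsize, exponential convergence.
x_gd = sgd(np.ones(5), grad, stepsize=lambda t: 0.5,
           noise_std=0.0, n_iters=50)

# Noisy case: a decreasing O(1/t) stepsize controls the variance term.
x_sgd = sgd(np.ones(5), grad, stepsize=lambda t: 1.0 / (t + 2),
            noise_std=0.1, n_iters=5000)

print(np.linalg.norm(x_gd))   # exponentially small
print(np.linalg.norm(x_sgd))  # small, limited by the gradient noise
```

With zero noise the iterates contract by a constant factor each step, mirroring the interpolation setting in the abstract; with noise, only the decreasing stepsize schedule drives the iterates toward the minimizer.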