
New logarithmic step size for stochastic gradient descent

Abstract

In this paper, we propose a novel warm restart technique that uses a new logarithmic step size for the stochastic gradient descent (SGD) approach. For smooth and non-convex functions, we establish an $O(\frac{1}{\sqrt{T}})$ convergence rate for SGD. We conduct comprehensive experiments to demonstrate the efficiency of the newly proposed step size on the FashionMNIST, CIFAR10, and CIFAR100 datasets. Moreover, we compare our results with nine other existing approaches and demonstrate that the new logarithmic step size improves test accuracy by $0.9\%$ on the CIFAR100 dataset when we utilize a convolutional neural network (CNN) model.
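
The abstract names the schedule but not its closed form. As a rough sketch, a step size that decays logarithmically within each warm-restart cycle might look like the following; the function `log_step_size`, the decay formula, and the cycle length `T` are illustrative assumptions, not the paper's actual definition:

```python
import math

def log_step_size(t, T, eta0=0.1):
    """Illustrative logarithmic decay within one restart cycle of length T.

    Hypothetical form (not taken from the paper's abstract):
        eta_t = eta0 * (1 - ln(t + 1) / ln(T + 2))
    It equals eta0 at t = 0 and decays slowly, staying positive for t <= T.
    """
    return eta0 * (1.0 - math.log(t + 1) / math.log(T + 2))

# Warm restarts: the step size is reset to eta0 at the start of each cycle.
T = 100  # assumed cycle length (in iterations or epochs)
for step in range(3 * T):
    t = step % T              # position within the current cycle
    lr = log_step_size(t, T)
    # ... perform one SGD update with learning rate `lr` ...
```

Compared with a $1/\sqrt{t}$ or cosine decay, a logarithmic schedule of this shape keeps the step size relatively large for most of the cycle, which is one plausible reading of why the paper pairs it with warm restarts.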
