
SAdam: A Variant of Adam for Strongly Convex Functions

International Conference on Learning Representations (ICLR), 2019
Guanghui Wang
Shiyin Lu
Weiwei Tu
Lijun Zhang
Abstract

The Adam algorithm has become extremely popular for large-scale machine learning. Under the convexity condition, it has been proved to enjoy a data-dependent $O(\sqrt{T})$ regret bound, where $T$ is the time horizon. However, whether strong convexity can be utilized to further improve the performance remains an open problem. In this paper, we give an affirmative answer by developing a variant of Adam (referred to as SAdam) which achieves a data-dependent $O(\log T)$ regret bound for strongly convex functions. The essential idea is to maintain a faster-decaying yet carefully controlled step size for exploiting strong convexity. In addition, under a special configuration of hyperparameters, our SAdam reduces to SC-RMSprop, a recently proposed variant of RMSprop for strongly convex functions, for which we provide the first data-dependent logarithmic regret bound. Empirical results on optimizing strongly convex functions and training deep networks demonstrate the effectiveness of our method.
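To make the "faster-decaying yet controlled step size" idea concrete, below is a minimal illustrative sketch of an SAdam-style update on a strongly convex quadratic. It is not the paper's exact algorithm or hyperparameter schedule: the specific choices assumed here are a step size decaying as $\alpha/t$ (instead of Adam's $\alpha/\sqrt{t}$), the second moment used without the square root, and a damping term that also decays as $\epsilon/t$; the function `sadam_like_step` and all constants are hypothetical.

```python
import numpy as np

def sadam_like_step(theta, grad, m, v, t,
                    alpha=2.0, beta1=0.9, beta2=0.99, eps=2.0):
    """One SAdam-style update (illustrative sketch, not the paper's algorithm).

    Assumed differences from vanilla Adam:
      - step size decays as alpha / t (exploits strong convexity),
      - the second moment v is used WITHOUT the square root,
      - the damping term also decays, as eps / t.
    """
    m = beta1 * m + (1 - beta1) * grad        # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment, no sqrt below
    theta = theta - (alpha / t) * m / (v + eps / t)
    return theta, m, v

# Usage: minimize the strongly convex quadratic f(x) = 0.5 * ||x - x_star||^2.
x_star = np.array([1.0, -2.0])
theta = np.zeros(2)
m = np.zeros(2)
v = np.zeros(2)
for t in range(1, 5001):
    grad = theta - x_star                     # gradient of f at theta
    theta, m, v = sadam_like_step(theta, grad, m, v, t)
```

After the loop, `theta` should be close to `x_star`; the $1/t$ step-size decay is what the logarithmic regret analysis for strongly convex losses relies on, in contrast to the $1/\sqrt{t}$ decay behind Adam's $O(\sqrt{T})$ bound.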
