Step size adaptation in first-order methods for stochastic strongly
convex programming
Abstract
We propose a first-order method for stochastic strongly convex optimization that attains a fast rate of convergence. Our analysis shows that the proposed method is simple, easy to implement, and in the worst case asymptotically four times faster than its peers. We derive the method from several intuitive observations generalized from existing first-order optimization methods.
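The abstract does not specify the paper's step-size rule, so as background here is a minimal sketch of the standard baseline such methods build on: stochastic gradient descent with the classical diminishing step size 1/(mu*t) for a mu-strongly convex objective, which attains O(1/t) expected suboptimality. The function names `sgd_strongly_convex` and `noisy_grad` are illustrative, not from the paper.

```python
import random

def sgd_strongly_convex(grad, x0, mu, steps, seed=0):
    """SGD with the classical 1/(mu*t) step size for a mu-strongly
    convex objective (baseline, not the paper's adaptive rule)."""
    rng = random.Random(seed)
    x = x0
    for t in range(1, steps + 1):
        eta = 1.0 / (mu * t)   # diminishing step size
        x = x - eta * grad(x, rng)
    return x

# Illustrative problem: minimize f(x) = 0.5 * (x - 3)^2, which is
# 1-strongly convex, using gradients corrupted by zero-mean noise.
def noisy_grad(x, rng):
    return (x - 3.0) + rng.gauss(0.0, 0.1)

x_hat = sgd_strongly_convex(noisy_grad, x0=0.0, mu=1.0, steps=5000)
```

With mu = 1 this step size makes the iterate a running average of noisy targets, so `x_hat` converges to the minimizer 3.0 as the noise averages out.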
