
Enhancing Stochastic Gradient Descent: A Unified Framework and Novel Acceleration Methods for Faster Convergence

Abstract

Building on SGD, previous works have proposed many algorithms that improve convergence speed and generalization in stochastic optimization, such as SGD with momentum (SGDm), AdaGrad, and Adam. However, their convergence analysis in the non-convex setting remains challenging. In this work, we propose a unified framework to address this issue. For any first-order method, we interpret the update direction $g_t$ as the sum of the stochastic subgradient $\nabla f_t(x_t)$ and an additional acceleration term $\frac{2|\langle v_t, \nabla f_t(x_t) \rangle|}{\|v_t\|_2^2} v_t$, so that convergence can be discussed by analyzing $\langle v_t, \nabla f_t(x_t) \rangle$. Through our framework, we have discovered two plug-and-play acceleration methods: \textbf{Reject Accelerating} and \textbf{Random Vector Accelerating}, and we theoretically demonstrate that both methods directly improve the convergence rate.
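To make the decomposition concrete, below is a minimal NumPy sketch of the update direction described in the abstract, i.e. the stochastic gradient plus the acceleration term $\frac{2|\langle v_t, \nabla f_t(x_t) \rangle|}{\|v_t\|_2^2} v_t$. The function name, the choice of $v_t$, the step size, and the toy objective are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def accelerated_direction(grad, v, eps=1e-12):
    """Illustrative update direction g_t = grad + (2|<v, grad>| / ||v||_2^2) * v.

    grad : stochastic (sub)gradient \nabla f_t(x_t) at the current iterate
    v    : acceleration vector v_t (e.g., a momentum buffer or a random vector)
    """
    inner = np.dot(v, grad)                           # <v_t, \nabla f_t(x_t)>
    coeff = 2.0 * abs(inner) / (np.dot(v, v) + eps)   # 2|<v_t, grad>| / ||v_t||_2^2
    return grad + coeff * v

# Toy usage (hypothetical): one SGD-style step x_{t+1} = x_t - lr * g_t
rng = np.random.default_rng(0)
x = rng.normal(size=5)
grad = 2.0 * x                     # gradient of the stand-in objective f(x) = ||x||^2
v = rng.normal(size=5)             # assumed acceleration vector v_t
x_next = x - 0.1 * accelerated_direction(grad, v)
```

When $v_t$ is chosen so that $\langle v_t, \nabla f_t(x_t) \rangle$ is well behaved, the extra term only reshapes the descent direction, which is what allows the convergence analysis to reduce to this inner product.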
