ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
We propose a novel accelerated variance-reduced gradient method called ANITA for finite-sum optimization. In this paper, we consider both the general convex and strongly convex settings. In the general convex setting, ANITA achieves the convergence result $O\big(n\min\big\{1+\log\frac{1}{\epsilon\sqrt{n}},\, \log\sqrt{n}\big\} + \sqrt{\frac{nL}{\epsilon}}\big)$, which improves the previous best result $O\big(n\min\big\{\log\frac{1}{\epsilon},\, \log n\big\} + \sqrt{\frac{nL}{\epsilon}}\big)$ given by Varag (Lan et al., 2019). In particular, for a very wide range of $\epsilon$, i.e., $\epsilon \in \big(0, \frac{L\|x^0-x^*\|^2}{n\log^2\sqrt{n}}\big] \cup \big[\frac{L\|x^0-x^*\|^2}{\sqrt{n}}, +\infty\big)$, where $\epsilon$ is the error tolerance, $n$ is the number of data samples, $L$ is the smoothness constant, and $x^0$ and $x^*$ denote the initial point and an optimal solution, ANITA achieves the optimal convergence result $O\big(n+\sqrt{\frac{nL}{\epsilon}}\big)$, matching the lower bound $\Omega\big(n+\sqrt{\frac{nL}{\epsilon}}\big)$ provided by Woodworth and Srebro (2016). To the best of our knowledge, ANITA is the \emph{first} accelerated algorithm that \emph{exactly} achieves this optimal result for general convex finite-sum problems. In the strongly convex setting, we also show that ANITA achieves the optimal convergence result $O\big((n+\sqrt{\frac{nL}{\mu}})\log\frac{1}{\epsilon}\big)$, matching the lower bound $\Omega\big((n+\sqrt{\frac{nL}{\mu}})\log\frac{1}{\epsilon}\big)$ provided by Lan and Zhou (2015), where $\mu$ is the strong convexity parameter. Moreover, ANITA enjoys a simpler loopless algorithmic structure, in contrast to previous accelerated algorithms such as Katyusha (Allen-Zhu, 2017) and Varag (Lan et al., 2019), which rely on an inconvenient double-loop structure. Finally, experimental results show that ANITA converges faster than the previous state-of-the-art method Varag (Lan et al., 2019), validating our theoretical results and confirming the practical superiority of ANITA.
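To make the contrast with double-loop methods concrete, below is a minimal sketch of a loopless structure, written in the style of L-SVRG (a related loopless variance-reduced method) rather than ANITA's actual accelerated update, which the abstract does not specify. The function name loopless_svrg, the least-squares objective, and the parameter choices (step_size, p) are all illustrative assumptions: instead of an outer loop that recomputes the full gradient once per epoch, the anchor point is refreshed with a small probability p at every iteration.

```python
import numpy as np

def loopless_svrg(A, b, step_size, p, iters=20000, seed=0):
    """Minimize f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2 with a
    single-loop (loopless) variance-reduced method in the style of
    L-SVRG. This sketches only the loopless structure; it is not
    ANITA's actual update rule, which additionally uses acceleration."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)                       # current iterate
    w = x.copy()                          # anchor (snapshot) point
    full_grad = A.T @ (A @ w - b) / n     # full gradient at the anchor

    for _ in range(iters):
        i = rng.integers(n)
        # Unbiased variance-reduced estimator: its variance vanishes
        # as both x and the anchor w approach the minimizer.
        g = (A[i] @ x - b[i]) * A[i] - (A[i] @ w - b[i]) * A[i] + full_grad
        x = x - step_size * g
        # Loopless trick: refresh the anchor with probability p per
        # iteration instead of in a separate outer loop.
        if rng.random() < p:
            w = x.copy()
            full_grad = A.T @ (A @ w - b) / n
    return x

# Usage on a random least-squares instance (illustrative only).
rng = np.random.default_rng(1)
A, b = rng.standard_normal((200, 10)), rng.standard_normal(200)
L_max = np.max(np.sum(A**2, axis=1))      # max component smoothness
x_hat = loopless_svrg(A, b, step_size=1 / (6 * L_max), p=1 / 200)
print("gradient norm:", np.linalg.norm(A.T @ (A @ x_hat - b) / 200))
```

Setting $p \approx 1/n$ makes the expected frequency of anchor refreshes match one full-gradient pass per $n$ stochastic steps, mirroring the epoch length of double-loop methods while removing the epoch-length parameter itself.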