
Convergence of Sign-based Random Reshuffling Algorithms for Nonconvex Optimization

Main: 32 pages, 6 figures; bibliography: 5 pages
Abstract

signSGD is popular in nonconvex optimization due to its communication efficiency. Yet existing analyses typically assume data are sampled with replacement at each iteration, contradicting the common practical implementation in which data are randomly reshuffled and fed to the algorithm sequentially. This gap leaves the theoretical understanding of the more practical algorithm, signSGD with random reshuffling (SignRR), largely unexplored. We develop the first analysis of SignRR and identify the core technical challenge that prevents a thorough convergence analysis of this method. In particular, given a dataset of size $n$ and $T$ epochs, we show that the expected gradient norm of SignRR is upper bounded by $O(\log(nT)/\sqrt{nT} + \sigma)$, where $\sigma$ is the averaged conditional mean squared error, which may not vanish. To overcome this limitation, we develop two new sign-based algorithms under random reshuffling: SignRVR, which incorporates variance-reduced gradients, and SignRVM, which integrates momentum-based updates. Both algorithms achieve a faster convergence rate of $O(\log(nT)/\sqrt{nT} + \log(nT)\sqrt{n}/\sqrt{T})$. We further extend our algorithms to the distributed setting, with a convergence rate of $O(\log(n_0T)/\sqrt{n_0T} + \log(n_0T)\sqrt{n_0}/\sqrt{T})$, where $n_0$ is the size of a single machine's dataset. These results mark a first step toward a theoretical understanding of practical implementations of sign-based optimization algorithms. Finally, we back up our theoretical findings with experiments on simulated and real-world problems, verifying that randomly reshuffled sign-based methods match or surpass existing baselines.
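To make the sampling distinction concrete, the following is a minimal sketch of the SignRR update loop on a toy least-squares problem: each epoch reshuffles the dataset once and then takes one sign-of-gradient step per sample in the shuffled order. The objective, step size, and function name are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def sign_rr(A, b, x0, epochs, lr):
    """Sketch of signSGD with random reshuffling (SignRR).

    Each epoch: permute the n samples once (sampling WITHOUT replacement),
    then take one sign-based step per sample in the permuted order.
    Toy objective: f_i(x) = 0.5 * (a_i^T x - b_i)^2.
    """
    rng = np.random.default_rng(0)
    x = x0.copy()
    n = A.shape[0]
    for _ in range(epochs):
        perm = rng.permutation(n)            # random reshuffling, not i.i.d. draws
        for i in perm:
            grad = (A[i] @ x - b[i]) * A[i]  # per-sample gradient of f_i
            x -= lr * np.sign(grad)          # elementwise sign update
    return x

# Toy problem: noiseless linear regression (illustrative only).
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 5))
x_star = rng.normal(size=5)
b = A @ x_star
x = sign_rr(A, b, np.zeros(5), epochs=200, lr=1e-3)
```

Replacing `rng.permutation(n)` with `rng.integers(0, n, size=n)` would recover the with-replacement sampling that most existing signSGD analyses assume; the point of the paper is that the two regimes require different proofs.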
