LSAM: Asynchronous Distributed Training with Landscape-Smoothed Sharpness-Aware Minimization

Main: 6 pages · 7 figures · 2 tables · Bibliography: 4 pages · Appendix: 11 pages
Abstract
While Sharpness-Aware Minimization (SAM) improves generalization in deep neural networks by minimizing both loss and sharpness, it suffers from inefficiency in distributed large-batch training. We present Landscape-Smoothed SAM (LSAM), a novel optimizer that preserves SAM's generalization advantages while offering superior efficiency. LSAM integrates SAM's adversarial steps with an asynchronous distributed sampling strategy, producing a smoothed sharpness-aware loss landscape for optimization. This design eliminates synchronization bottlenecks, accelerates large-batch convergence, and delivers higher final accuracy compared to data-parallel SAM.
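The abstract does not spell out LSAM's asynchronous sampling or landscape-smoothing mechanics, but it builds on SAM's standard adversarial step. As context, the sketch below shows that SAM component only: a minimal, illustrative PyTorch-style implementation (the function name, `rho`, and helper structure are assumptions, not the paper's code), in which weights are first perturbed along the gradient direction before the actual descent step is taken.

```python
import torch

def sam_adversarial_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    """One standard SAM update: ascend to a nearby worst-case point, then descend."""
    # First forward/backward pass: gradient at the current weights.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Build the adversarial perturbation e = rho * g / ||g|| and apply it in place.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    perturbed = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                      # move to the perturbed (sharpness-probing) weights
            perturbed.append((p, e))
    model.zero_grad()

    # Second forward/backward pass: gradient evaluated at the perturbed weights.
    loss_fn(model(inputs), targets).backward()

    # Undo the perturbation, then step with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in perturbed:
            p.sub_(e)
    base_optimizer.step()
    model.zero_grad()
    return loss.item()
```

In plain data-parallel SAM, both forward/backward passes above must complete on every worker before a synchronized update, which is the bottleneck LSAM's asynchronous scheme is described as removing.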
