
Pushing the Limits of Low-Bit Optimizers: A Focus on EMA Dynamics

Abstract

The rapid growth of model sizes drives prohibitive training and fine-tuning costs, particularly for stateful optimizers, which maintain auxiliary information as large as 2x the model size to achieve optimal convergence. In this work we present a novel type of optimizer that carries extremely lightweight state, achieved through ultra-low-precision quantization. While previous efforts have had some success with 8-bit or 4-bit quantization, our approach enables optimizers to operate at precisions as low as 3 or even 2 bits per state element. This is accomplished by identifying and addressing two critical challenges: the signal swamping problem in unsigned quantization, which leaves state dynamics unchanged, and the rapidly increasing gradient variance in signed quantization, which leads to incorrect descent directions. Our theoretical analysis suggests a tailored logarithmic quantization for the former and a precision-specific momentum value for the latter. The resulting optimizer, SOLO, achieves substantial memory savings (approximately 45 GB when training a 7B model) with minimal accuracy loss. We hope that SOLO helps ease the computational-resource bottleneck and thereby promotes broader accessibility in fundamental research.
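To make the logarithmic-quantization idea concrete, here is a minimal sketch of quantizing a nonnegative EMA state (e.g., a second-moment accumulator) to 2 bits using a logarithmically spaced codebook, so that small-magnitude entries keep resolution instead of being swamped to zero as under linear levels. The function names, the codebook base, and the per-tensor scaling are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def log_quantize(state, bits=2, base=4.0):
    """Quantize a nonnegative state tensor to `bits` bits per element.

    Codebook (illustrative): scale, scale/base, scale/base^2, ...
    Logarithmic spacing preserves small entries that linear levels
    would round to zero (the "signal swamping" failure mode).
    """
    n_levels = 2 ** bits
    scale = float(np.max(state))
    if scale == 0.0:
        return np.zeros(state.shape, dtype=np.uint8), scale
    ratio = np.clip(state / scale, 1e-12, 1.0)
    # Nearest exponent k such that state ~= scale * base**(-k)
    codes = np.clip(np.round(-np.log(ratio) / np.log(base)), 0, n_levels - 1)
    return codes.astype(np.uint8), scale

def log_dequantize(codes, scale, base=4.0):
    """Map integer codes back to approximate state values."""
    return scale * base ** (-codes.astype(np.float64))

# Example: four entries spanning three orders of magnitude
state = np.array([1.0, 0.25, 0.0625, 0.001])
codes, scale = log_quantize(state, bits=2, base=4.0)
recon = log_dequantize(codes, scale, base=4.0)
```

With only 4 levels, entries down to `scale / base**3` remain distinguishable; a linear 2-bit grid over the same range would collapse everything below `scale / 4` to a single level.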

@article{xu2025_2505.00347,
  title={Pushing the Limits of Low-Bit Optimizers: A Focus on EMA Dynamics},
  author={Cong Xu and Wenbin Liang and Mo Yu and Anan Liu and Ke-Yue Zhang and Lizhuang Ma and Jianyong Wang and Jun Wang and Wei Zhang},
  journal={arXiv preprint arXiv:2505.00347},
  year={2025}
}