Training with Fewer Bits: Unlocking Edge LLMs Training with Stochastic Rounding

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2025
Main: 9 pages · Bibliography: 2 pages · Appendix: 5 pages · 6 figures · 3 tables
Abstract

LLM training is resource-intensive. Quantized training improves computational and memory efficiency but introduces quantization noise, which can hinder convergence and degrade model accuracy. Stochastic Rounding (SR) has emerged as a theoretically attractive alternative to deterministic rounding, offering unbiased gradient estimates. However, its interaction with other training factors, especially batch size, remains underexplored. In this paper, we present a theoretical and empirical study of mini-batch stochastic gradient descent (SGD) with SR, showing that increased batch sizes can compensate for reduced precision during backpropagation. Furthermore, we show that quantizing weights and activations affects gradient variance in distinct ways. Our experiments validate these theoretical insights.
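To make the rounding mechanism concrete, below is a minimal NumPy sketch of stochastic rounding onto a fixed-point grid. The function name, bit width, and per-tensor scaling scheme are illustrative assumptions rather than the paper's exact quantizer; the point it demonstrates is that rounding up or down with probability proportional to the fractional distance makes the quantized value an unbiased estimate of the full-precision one.

```python
import numpy as np

def stochastic_round(x, num_bits=8, scale=None):
    """Quantize x with stochastic rounding so that E[SR(x)] = x.

    Assumes a hypothetical symmetric fixed-point grid; real low-precision
    training formats (e.g., block floating point) differ in detail.
    """
    if scale is None:
        # Per-tensor scale mapping the largest magnitude to the grid edge.
        scale = np.max(np.abs(x)) / (2 ** (num_bits - 1) - 1)
    scaled = x / scale
    floor = np.floor(scaled)
    prob_up = scaled - floor  # distance to the lower grid point in [0, 1)
    # Round up with probability equal to that distance, down otherwise.
    rounded = floor + (np.random.rand(*x.shape) < prob_up)
    return rounded * scale

# Unbiasedness check: averaging many SR draws recovers the full-precision values.
x = np.array([0.333, -1.25, 2.718])
draws = np.stack([stochastic_round(x, num_bits=4) for _ in range(10000)])
print(draws.mean(axis=0), "vs", x)
```

Because each SR draw is unbiased but noisy, averaging gradients over a larger mini-batch reduces the combined sampling-plus-quantization variance, which is the intuition behind the batch-size trade-off the abstract describes.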
