In recent years, the compression of large language models (LLMs) has emerged as a key problem in facilitating LLM deployment on resource-limited devices, reducing compute costs, and mitigating the environmental footprint of large-scale AI infrastructure. Here, we establish the foundations of LLM quantization from a rate-distortion theory perspective and propose a quantization technique based on simple rate-distortion optimization. Our technique scales to models containing hundreds of billions of weight parameters and gives users the flexibility to compress models, post-training, to a user-specified model size or accuracy.
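The abstract does not spell out the optimization itself, so the sketch below only illustrates the general rate-distortion idea using classical reverse water-filling: bits are allocated across weight groups under a Gaussian distortion proxy (distortion of group i roughly sigma_i^2 * 2^(-2 b_i)), and each group is then uniformly quantized to its allocated depth. The function names, the distortion proxy, and the toy data are assumptions for illustration, not the paper's Radio algorithm.

```python
import numpy as np

def allocate_bits(variances, target_avg_bits, max_bits=8, tol=1e-6):
    """Reverse water-filling bit allocation (assumed distortion sigma_i^2 * 2^(-2 b_i)).

    Bisects on the water level theta so the average bit depth meets the target;
    the final rounding to integer bits makes this only approximately rate-matched.
    """
    variances = np.maximum(np.asarray(variances, dtype=float), 1e-12)
    lo, hi = 0.0, float(variances.max())
    for _ in range(100):
        theta = 0.5 * (lo + hi)
        bits = np.clip(0.5 * np.log2(variances / theta), 0.0, max_bits)
        if bits.mean() > target_avg_bits:
            lo = theta   # too many bits on average -> raise the water level
        else:
            hi = theta   # too few bits on average  -> lower the water level
        if hi - lo < tol:
            break
    return np.round(bits).astype(int)

def quantize(weights, bits):
    """Uniform quantization of one weight group to the allocated bit depth."""
    if bits <= 0:
        return np.zeros_like(weights)
    levels = 2 ** int(bits)
    scale = (weights.max() - weights.min()) / max(levels - 1, 1)
    if scale == 0:
        return weights.copy()
    return np.round((weights - weights.min()) / scale) * scale + weights.min()

# Toy usage: per-row variances stand in for per-group sensitivities.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 64)) * np.array([[4.0], [2.0], [1.0], [0.25]])
bits = allocate_bits(W.var(axis=1), target_avg_bits=3.0)
W_hat = np.vstack([quantize(W[i], bits[i]) for i in range(len(bits))])
print("bits per group:", bits, "| mse:", float(((W - W_hat) ** 2).mean()))
```

In this toy setting, higher-variance groups receive more bits while the lowest-variance group may be dropped entirely, which is the qualitative behavior a rate-distortion-optimized allocation aims for.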
@article{young2025_2505.03031,
  title={Radio: Rate-Distortion Optimization for Large Language Model Compression},
  author={Sean I. Young},
  journal={arXiv preprint arXiv:2505.03031},
  year={2025}
}