
SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression

Abstract

The advancements of Large Language Models (LLMs) have been hindered by their substantial sizes, which necessitate LLM compression methods for practical deployment. Singular Value Decomposition (SVD) offers a promising solution for LLM compression. However, state-of-the-art SVD-based LLM compression methods have two key limitations: (1) truncating smaller singular values may lead to higher compression loss, and (2) the compressed weights are not updated after SVD truncation. In this work, we propose SVD-LLM, an SVD-based post-training LLM compression method that addresses the limitations of existing methods. SVD-LLM incorporates a truncation-aware data whitening technique that ensures a direct mapping between singular values and compression loss. Moreover, SVD-LLM adopts a parameter update with sequential low-rank approximation to compensate for the accuracy degradation after SVD compression. We evaluate SVD-LLM on ten datasets and seven models from three LLM families at three scales. Our results demonstrate the superiority of SVD-LLM over state-of-the-art methods, especially at high model compression ratios. Our code is available at this https URL.
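To make the whitening idea concrete, below is a minimal NumPy sketch of truncation-aware SVD compression of a single weight matrix. The function name `svd_llm_compress`, the Cholesky-based whitening matrix, and the damping term are illustrative assumptions based on the abstract's description, not the paper's exact implementation.

```python
import numpy as np

def svd_llm_compress(W, X, rank, damping=1e-6):
    """Hypothetical sketch of truncation-aware SVD compression.

    W: (out_dim, in_dim) weight matrix
    X: (in_dim, n_samples) calibration input activations
    rank: number of singular values to keep
    """
    in_dim = X.shape[0]
    # Whitening matrix: Cholesky factor of the (damped) activation Gram
    # matrix, so that truncating singular values of W @ S corresponds
    # directly to the compression loss on the calibration data.
    S = np.linalg.cholesky(X @ X.T + damping * np.eye(in_dim))
    # SVD of the whitened weight, then keep the top-`rank` components.
    U, sigma, Vt = np.linalg.svd(W @ S, full_matrices=False)
    U_k, s_k, Vt_k = U[:, :rank], sigma[:rank], Vt[:rank, :]
    # Map back to the original input space and return the two low-rank
    # factors, which is where the parameter savings come from.
    A = U_k * s_k                      # (out_dim, rank)
    B = Vt_k @ np.linalg.inv(S)        # (rank, in_dim)
    return A, B
```

In the factored form, storing A and B replaces out_dim x in_dim parameters with rank x (out_dim + in_dim), so the compression ratio is set by the chosen rank.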

@article{wang2025_2403.07378,
  title={SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression},
  author={Xin Wang and Yu Zheng and Zhongwei Wan and Mi Zhang},
  journal={arXiv preprint arXiv:2403.07378},
  year={2025}
}