SSVQ: Unleashing the Potential of Vector Quantization with Sign-Splitting

11 March 2025
Shuaiting Li
Juncan Deng
Chenxuan Wang
Kedong Xu
Rongtao Deng
Hong Gu
Haibin Shen
Kejie Huang
Abstract

Vector Quantization (VQ) has emerged as a prominent weight compression technique, showcasing substantially lower quantization errors than uniform quantization across diverse models, particularly in extreme compression scenarios. However, its efficacy during fine-tuning is limited by the constraint of the compression format, where weight vectors assigned to the same codeword are restricted to updates in the same direction. Consequently, many quantized weights are compelled to move in directions contrary to their local gradient information. To mitigate this issue, we introduce a novel VQ paradigm, Sign-Splitting VQ (SSVQ), which decouples the sign bit of weights from the codebook. Our approach involves extracting the sign bits of uncompressed weights and performing clustering and compression on all-positive weights. We then introduce latent variables for the sign bit and jointly optimize both the signs and the codebook. Additionally, we implement a progressive freezing strategy for the learnable sign to ensure training stability. Extensive experiments on various modern models and tasks demonstrate that SSVQ achieves a significantly superior compression-accuracy trade-off compared to conventional VQ. Furthermore, we validate our algorithm on a hardware accelerator, showing that SSVQ achieves a 3× speedup over the 8-bit compressed model by reducing memory access.
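The sign-splitting step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it covers only the decomposition (store sign bits separately, vector-quantize the all-positive magnitudes with plain k-means) and reconstruction, not the joint optimization of latent sign variables or the progressive freezing schedule; `vec_dim` and `num_codewords` are illustrative parameters.

```python
import numpy as np

def ssvq_compress(W, num_codewords=16, vec_dim=4, iters=20, seed=0):
    """Sketch of the sign-splitting idea: keep sign bits aside and
    vector-quantize only the magnitudes |W|, which are all positive."""
    rng = np.random.default_rng(seed)
    signs = np.sign(W)
    signs[signs == 0] = 1.0                  # treat exact zeros as positive
    mags = np.abs(W).reshape(-1, vec_dim)    # group magnitudes into vectors

    # Plain k-means (Lloyd's algorithm) on the all-positive vectors.
    codebook = mags[rng.choice(len(mags), num_codewords, replace=False)].copy()
    for _ in range(iters):
        dists = ((mags[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        for k in range(num_codewords):
            members = mags[assign == k]
            if len(members):
                codebook[k] = members.mean(0)

    return signs, codebook, assign

def ssvq_decompress(signs, codebook, assign):
    """Reconstruct weights: codebook lookup gives magnitudes,
    then reapply the stored sign bits element-wise."""
    mags = codebook[assign].reshape(signs.shape)
    return signs * mags
```

Because the codebook only has to model magnitudes, each sign can later be flipped independently of the codeword its vector shares, which is what removes the same-direction update constraint the abstract describes.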

@article{li2025_2503.08668,
  title={SSVQ: Unleashing the Potential of Vector Quantization with Sign-Splitting},
  author={Shuaiting Li and Juncan Deng and Chenxuan Wang and Kedong Xu and Rongtao Deng and Hong Gu and Haibin Shen and Kejie Huang},
  journal={arXiv preprint arXiv:2503.08668},
  year={2025}
}