Grouped Sequency-arranged Rotation: Optimizing Rotation Transformation for Quantization for Free

Abstract

Large Language Models (LLMs) face deployment challenges due to high computational costs, and while Post-Training Quantization (PTQ) offers a solution, existing rotation-based methods struggle at very low bit-widths such as 2-bit. We introduce a novel, training-free approach to constructing an improved rotation matrix that addresses the limitations of current methods. First, we leverage the Walsh-Hadamard transform with sequency ordering, which clusters similar frequency components and thereby reduces quantization error compared to the standard Hadamard matrix, significantly improving performance. Second, we propose Grouped Sequency-arranged Rotation (GSR), which uses block-diagonal matrices composed of smaller Walsh blocks to isolate the impact of outliers, achieving performance comparable to optimization-based methods without any training. Our method demonstrates robust performance on reasoning tasks and in Perplexity (PPL) on WikiText-2, and it further improves results even when applied on top of existing learned rotation techniques.
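
To make the two constructions above concrete, the following NumPy/SciPy sketch shows one way to build a sequency-ordered Walsh-Hadamard rotation and a block-diagonal, GSR-style rotation assembled from smaller Walsh blocks. The abstract gives no implementation details, so the function names and the group size of 64 are illustrative assumptions rather than the authors' code.

import numpy as np
from scipy.linalg import hadamard, block_diag

def sequency_ordered_walsh(n):
    """n x n Walsh matrix with rows sorted by sequency (number of sign
    changes per row), scaled to be orthonormal. n must be a power of two."""
    H = hadamard(n)                                     # natural-ordered (Sylvester) Hadamard
    seq = np.count_nonzero(np.diff(H, axis=1), axis=1)  # sign changes per row
    return H[np.argsort(seq)] / np.sqrt(n)              # reorder rows, normalize

def grouped_sequency_rotation(dim, group_size=64):
    """Block-diagonal rotation built from smaller sequency-ordered
    Walsh blocks (group_size=64 is an assumed, illustrative choice)."""
    assert dim % group_size == 0
    block = sequency_ordered_walsh(group_size)
    return block_diag(*[block] * (dim // group_size))

# Usage: a 512-dimensional rotation; R @ R.T should be (numerically) the identity.
R = grouped_sequency_rotation(512)
assert np.allclose(R @ R.T, np.eye(512), atol=1e-6)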

@article{choi2025_2505.03810,
  title={Grouped Sequency-arranged Rotation: Optimizing Rotation Transformation for Quantization for Free},
  author={Euntae Choi and Sumin Song and Woosang Lim and Sungjoo Yoo},
  journal={arXiv preprint arXiv:2505.03810},
  year={2025}
}