KurTail: Kurtosis-based LLM Quantization

Abstract

One of the challenges of quantizing a large language model (LLM) is the presence of outliers. Outliers often make uniform quantization schemes less effective, particularly in extreme cases such as 4-bit quantization. We introduce KurTail, a new post-training quantization (PTQ) scheme that leverages kurtosis-based rotation to mitigate outliers in the activations of LLMs. Our method optimizes the rotation using kurtosis, a measure of tailedness, as the objective. This approach enables the quantization of weights, activations, and the KV cache in 4 bits. We use layer-wise optimization to keep memory requirements low. KurTail outperforms existing quantization methods, offering a 13.3% boost in MMLU accuracy and a 15.5% drop in Wiki perplexity compared to QuaRot. It also outperforms SpinQuant with a 2.6% MMLU gain and a 2.9% perplexity reduction, all while lowering the training cost. For comparison, learning the rotation with SpinQuant for Llama3-70B requires at least four NVIDIA H100 80GB GPUs, whereas our method requires only a single GPU, making it a more accessible solution for consumer GPUs.
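The intuition behind rotation-based PTQ schemes like the one in the abstract can be illustrated with a small sketch: activations with a few outlier channels have large excess kurtosis (heavy tails), and multiplying by an orthogonal rotation mixes channels and spreads the outlier energy, flattening the tails. The toy data, the random QR-based rotation, and the `kurtosis` helper below are illustrative assumptions, not the paper's actual optimized rotation or training procedure.

```python
import numpy as np

def kurtosis(x):
    """Excess kurtosis: heavy tails / outliers give large positive values."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

rng = np.random.default_rng(0)
# Toy "activations": mostly standard Gaussian, with two outlier channels.
acts = rng.normal(size=(1024, 64))
acts[:, :2] *= 30.0  # outlier channels inflate the tails

# A random orthogonal rotation (from QR decomposition) mixes channels,
# spreading the outlier energy across all coordinates.
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
rotated = acts @ Q

print(kurtosis(acts.ravel()))     # large: heavy-tailed, hard to quantize uniformly
print(kurtosis(rotated.ravel()))  # far smaller: closer to Gaussian
```

A rotation can be applied to activations and absorbed into adjacent weight matrices without changing the network's output, which is why flattening the distribution this way comes essentially for free at inference time; KurTail's contribution is choosing the rotation by directly minimizing kurtosis rather than using a fixed or randomly drawn one.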

@article{akhondzadeh2025_2503.01483,
  title={KurTail: Kurtosis-based LLM Quantization},
  author={Mohammad Sadegh Akhondzadeh and Aleksandar Bojchevski and Evangelos Eleftheriou and Martino Dazzi},
  journal={arXiv preprint arXiv:2503.01483},
  year={2025}
}