QuantX: A Framework for Hardware-Aware Quantization of Generative AI Workloads

Abstract

We present QuantX, a tailored suite of recipes for LLM and VLM quantization that can quantize models down to 3-bit resolution with minimal loss in performance. The quantization strategies in QuantX account for hardware-specific constraints to achieve efficient dequantization during inference, enabling a flexible trade-off between runtime speed, memory requirements, and model accuracy. Our results demonstrate that QuantX achieves performance within 6% of the unquantized model for LLaVA-v1.6 quantized down to 3 bits across multiple end-user tasks, and that it outperforms recently published state-of-the-art quantization techniques. This manuscript also provides insights into the LLM quantization process that motivated the range of recipes and options incorporated in QuantX.
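
The abstract describes quantizing weights to 3-bit resolution while keeping dequantization cheap at inference time. As a rough illustration of the general idea (not the QuantX recipe itself, which is not detailed here), group-wise uniform 3-bit quantization stores, per group, integer codes in the range 0..7 plus a scale and zero-point; the group size and asymmetric scheme below are illustrative assumptions:

```python
def quantize_3bit(weights, group_size=4):
    """Group-wise asymmetric uniform quantization to 3 bits (codes 0..7).

    Illustrative sketch only; real frameworks pack codes into bitfields
    and tune group size per layer and target hardware.
    """
    qweights, scales, zeros = [], [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 7 or 1.0  # 7 = 2**3 - 1 levels above the zero-point
        qweights.append([round((w - lo) / scale) for w in group])
        scales.append(scale)
        zeros.append(lo)
    return qweights, scales, zeros


def dequantize_3bit(qweights, scales, zeros):
    """Inverse mapping: w ≈ code * scale + zero_point."""
    out = []
    for codes, s, z in zip(qweights, scales, zeros):
        out.extend(c * s + z for c in codes)
    return out
```

Because dequantization is a single multiply-add per weight, it maps well onto hardware with fast fused multiply-add units, which is the kind of hardware constraint the abstract alludes to.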

@article{mazher2025_2505.07531,
  title={QuantX: A Framework for Hardware-Aware Quantization of Generative AI Workloads},
  author={Khurram Mazher and Saad Bin Nasir},
  journal={arXiv preprint arXiv:2505.07531},
  year={2025}
}