ConvRot: Rotation-Based Plug-and-Play 4-bit Quantization for Diffusion Transformers

Feice Huang, Zuliang Han, Xing Zhou, Yihuang Chen, Lifei Zhu, Haoqian Wang
Main: 7 pages · Appendix: 5 pages · Bibliography: 3 pages · 11 figures · 8 tables
Abstract

Diffusion transformers have demonstrated strong capabilities in generating high-quality images. However, as model size increases, the growing memory footprint and inference latency pose significant challenges for practical deployment. Recent studies in large language models (LLMs) show that rotation-based techniques can smooth outliers and enable 4-bit quantization, but these approaches often incur substantial overhead and struggle with row-wise outliers in diffusion transformers. To address these challenges, we propose ConvRot, a group-wise rotation-based quantization method that leverages regular Hadamard transform (RHT) to suppress both row-wise and column-wise outliers while reducing complexity from quadratic to linear. Building on this, we design ConvLinear4bit, a plug-and-play module that integrates rotation, quantization, GEMM, and dequantization, enabling W4A4 inference without retraining and preserving visual quality. Experiments on FLUX.1-dev demonstrate a 2.26× speedup and 4.05× memory reduction while maintaining image fidelity. To our knowledge, this is the first application of rotation-based quantization for plug-and-play W4A4 inference in diffusion transformers.
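To make the abstract's two key ideas concrete, here is a minimal PyTorch sketch of (a) a group-wise (block-diagonal) Hadamard rotation, whose cost is linear in the hidden dimension for a fixed group size rather than quadratic for a dense rotation, and (b) a rotate-quantize-GEMM-dequantize pipeline emulating a W4A4 linear layer. This is an illustration under stated assumptions, not the paper's implementation: the function names (`groupwise_rotate`, `quantize_sym4`, `w4a4_linear`), the group size of 64, and the per-tensor symmetric scaling are all hypothetical choices, and the actual ConvLinear4bit module fuses these steps into optimized kernels rather than emulating them in floating point.

```python
import torch

def hadamard_matrix(n: int) -> torch.Tensor:
    """Sylvester construction of an orthonormal n x n Hadamard matrix
    (n must be a power of 2); satisfies H @ H.T == I."""
    assert n > 0 and n & (n - 1) == 0, "n must be a power of 2"
    H = torch.ones(1, 1)
    while H.shape[0] < n:
        H = torch.cat([torch.cat([H, H], dim=1),
                       torch.cat([H, -H], dim=1)], dim=0)
    return H / n ** 0.5

def groupwise_rotate(x: torch.Tensor, group_size: int = 64) -> torch.Tensor:
    """Block-diagonal Hadamard rotation along the last dimension.

    A dense d x d rotation costs O(d^2) per row; rotating independent
    groups of size g costs O(d * g), i.e. linear in d for fixed g."""
    *lead, d = x.shape
    assert d % group_size == 0, "hidden dim must be divisible by group size"
    H = hadamard_matrix(group_size).to(dtype=x.dtype, device=x.device)
    return (x.reshape(*lead, d // group_size, group_size) @ H).reshape(*lead, d)

def quantize_sym4(x: torch.Tensor):
    """Symmetric 4-bit quantization: integer levels in [-8, 7], one scale
    per tensor (real kernels would use finer-grained scales)."""
    scale = x.abs().amax().clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(x / scale), -8, 7)
    return q, scale

def w4a4_linear(x: torch.Tensor, w: torch.Tensor,
                group_size: int = 64) -> torch.Tensor:
    """Rotate -> quantize -> GEMM -> dequantize, emulating a W4A4 layer.

    The same orthonormal block-diagonal H is applied to both operands, so
    (x @ H) @ (w @ H).T == x @ w.T exactly; only quantization is lossy."""
    xr = groupwise_rotate(x, group_size)
    wr = groupwise_rotate(w, group_size)
    qx, sx = quantize_sym4(xr)
    qw, sw = quantize_sym4(wr)
    # Integer-valued GEMM emulated in floating point, then one dequant scale.
    return (qx @ qw.T) * (sx * sw)

# Usage: approximate x @ w.T under 4-bit weight and activation noise.
x = torch.randn(16, 512)   # activations: tokens x hidden
w = torch.randn(256, 512)  # weight: out_features x in_features
y = w4a4_linear(x, w)
```

The rotation changes nothing mathematically (it cancels inside the product), but it spreads outlier energy across each group, so the clamped 4-bit grid wastes fewer levels on rare extreme values.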
