Diffusion models have revolutionized generative tasks, especially in the domain of text-to-image synthesis; however, their iterative denoising process demands substantial computational resources. In this paper, we present a novel acceleration strategy that integrates token-level pruning with caching techniques to tackle this computational challenge. By employing noise relative magnitude, we identify significant token changes across denoising iterations. Additionally, we enhance token selection by incorporating spatial clustering and ensuring distributional balance. Our experiments demonstrate a 50%-60% reduction in computational costs while preserving the performance of the model, thereby markedly increasing the efficiency of diffusion models. The code is available at this https URL
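The selection step described above can be sketched roughly as follows. This is a minimal, hypothetical illustration (the function name, `keep_ratio` parameter, and toy data are assumptions, not the paper's actual implementation): tokens are ranked by the relative magnitude of their noise change between two denoising steps, and the top tokens are kept per spatial cluster so the selection stays distributionally balanced.

```python
import numpy as np

def select_active_tokens(noise_prev, noise_curr, cluster_ids, keep_ratio=0.4):
    """Hypothetical sketch: rank tokens by relative noise change across
    two denoising steps, then keep the top-ranked tokens within each
    spatial cluster to preserve distributional balance."""
    # Relative magnitude of the per-token noise change (L2 norm over channels).
    delta = np.linalg.norm(noise_curr - noise_prev, axis=-1)
    rel = delta / (np.linalg.norm(noise_prev, axis=-1) + 1e-8)

    keep = []
    for c in np.unique(cluster_ids):
        idx = np.where(cluster_ids == c)[0]
        k = max(1, int(round(keep_ratio * len(idx))))
        # Keep the k tokens with the largest relative change in this cluster;
        # the remaining tokens would reuse cached features.
        top = idx[np.argsort(rel[idx])[-k:]]
        keep.extend(top.tolist())
    return np.sort(np.array(keep))

# Toy example: 8 tokens, 4 channels, two spatial clusters.
rng = np.random.default_rng(0)
prev = rng.normal(size=(8, 4))
curr = prev + rng.normal(scale=0.1, size=(8, 4))
clusters = np.array([0, 0, 0, 0, 1, 1, 1, 1])
active = select_active_tokens(prev, curr, clusters, keep_ratio=0.5)
```

Pruned tokens would then be served from the cache rather than recomputed, which is where the computational savings come from.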
@article{cheng2025_2502.00433,
  title={CAT Pruning: Cluster-Aware Token Pruning For Text-to-Image Diffusion Models},
  author={Xinle Cheng and Zhuoming Chen and Zhihao Jia},
  journal={arXiv preprint arXiv:2502.00433},
  year={2025}
}