
DisCa: Accelerating Video Diffusion Transformers with Distillation-Compatible Learnable Feature Caching

Chang Zou
Changlin Li
Yang Li
Patrol Li
Jianbing Wu
Xiao He
Songtao Liu
Zhao Zhong
Kailin Huang
Linfeng Zhang
Main: 8 pages, 7 figures, 5 tables; Appendix: 5 pages; Bibliography: 4 pages
Abstract

While diffusion models have achieved great success in video generation, this progress is accompanied by a rapidly escalating computational burden. Among existing acceleration methods, feature caching is popular for its training-free nature and considerable speedup, but it inevitably suffers loss of semantics and detail under further compression. Another widely adopted approach, training-aware step distillation, though successful in image generation, degrades drastically in few-step video generation. Moreover, the quality loss becomes even more severe when training-free feature caching is naively applied to step-distilled models, owing to their sparser sampling steps. This paper introduces a distillation-compatible learnable feature caching mechanism for the first time. We employ a lightweight learnable neural predictor in place of traditional training-free heuristics, enabling more accurate capture of the high-dimensional feature evolution process. Furthermore, we examine the challenges of highly compressed distillation on large-scale video models and propose a conservative Restricted MeanFlow approach that achieves more stable and nearly lossless distillation. Together, these initiatives push the acceleration boundary to 11.8× while preserving generation quality. Extensive experiments demonstrate the effectiveness of our method. The code will be made publicly available soon.
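To make the idea of a learnable feature-cache predictor concrete, the following is a minimal NumPy sketch: instead of recomputing a transformer block at every diffusion step, a tiny two-layer MLP extrapolates the block's output feature from the last fully computed (cached) feature plus a timestep embedding. All names, shapes, and the architecture here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

class CachePredictor:
    """Hypothetical lightweight predictor: maps a cached feature and a
    diffusion timestep to an approximation of the feature at a skipped
    step, so the expensive transformer block need not be re-run."""

    def __init__(self, dim, t_dim=16, seed=0):
        rng = np.random.default_rng(seed)
        # Two-layer MLP: [feature ; timestep embedding] -> feature delta.
        self.w1 = rng.normal(0.0, 0.02, (dim + t_dim, dim))
        self.w2 = rng.normal(0.0, 0.02, (dim, dim))
        self.t_dim = t_dim

    def t_embed(self, t):
        # Sinusoidal timestep embedding, standard in diffusion models.
        half = self.t_dim // 2
        freqs = np.exp(-np.log(1e4) * np.arange(half) / half)
        return np.concatenate([np.sin(t * freqs), np.cos(t * freqs)])

    def __call__(self, cached_feat, t):
        # Predict the skipped step's feature as cached feature + learned delta,
        # so an untrained (near-zero) predictor degrades gracefully to reuse.
        x = np.concatenate([cached_feat, self.t_embed(t)])
        h = np.maximum(x @ self.w1, 0.0)        # hidden layer with ReLU
        return cached_feat + h @ self.w2        # residual prediction

# Usage: compute the block fully at one step, predict at the next.
dim = 64
pred = CachePredictor(dim)
feat = np.ones(dim)          # feature cached at a full-computation step
approx = pred(feat, t=0.5)   # predicted feature for the skipped step
```

In practice such a predictor would be trained (e.g. jointly with distillation, as the paper's title suggests) to minimize the error against the true block output; the residual formulation keeps the untrained predictor close to plain feature reuse.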
