Less is Enough: Training-Free Video Diffusion Acceleration via Runtime-Adaptive Caching

Video generation models have demonstrated remarkable performance, yet their broader adoption remains constrained by slow inference speeds and substantial computational costs, primarily due to the iterative nature of the denoising process. Addressing this bottleneck is essential for democratizing advanced video synthesis technologies and enabling their integration into real-world applications. This work proposes EasyCache, a training-free acceleration framework for video diffusion models. EasyCache introduces a lightweight, runtime-adaptive caching mechanism that dynamically reuses previously computed transformation vectors, avoiding redundant computation during inference. Unlike prior approaches, EasyCache requires no offline profiling, pre-computation, or extensive parameter tuning. We conduct comprehensive studies on various large-scale video generation models, including OpenSora, Wan2.1, and HunyuanVideo. Our method achieves leading acceleration performance, reducing inference time by 2.1x-3.3x compared to the original baselines while maintaining high visual fidelity, with up to a 36% PSNR improvement over the previous SOTA method. These results make EasyCache an efficient and highly accessible solution for high-quality video generation in both research and practical applications. The code is available at this https URL.
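To illustrate the general idea of runtime-adaptive caching in a denoising loop, the sketch below reuses a cached output delta ("transformation vector") whenever the input has drifted little since the last full forward pass. This is a minimal illustration, not the paper's algorithm: the function name cached_denoise, the threshold tau, and the relative-drift criterion are assumptions introduced here for exposition; the actual reuse criterion used by EasyCache may differ.

```python
import torch

def cached_denoise(model, latents, timesteps, tau=0.05):
    """Minimal sketch of runtime-adaptive caching in a diffusion loop.

    When the input latents have changed little since the last full
    forward pass, reuse the cached transformation (output - input)
    instead of recomputing the model. `tau` and the drift criterion
    are illustrative assumptions, not the paper's exact rule.
    """
    cached_delta = None   # output minus input from the last full pass
    prev_input = None     # input latents at the last full pass
    x = latents
    for t in timesteps:
        if cached_delta is not None:
            # Hypothetical reuse test: relative drift of the input
            # since the cache was last refreshed.
            drift = (x - prev_input).norm() / (prev_input.norm() + 1e-8)
            if drift < tau:
                x = x + cached_delta  # cheap step: reuse cached vector
                continue
        out = model(x, t)             # expensive full forward pass
        cached_delta = out - x        # refresh the cache
        prev_input = x.clone()
        x = out
    return x
```

Because the criterion is evaluated online from the current latents, a scheme like this needs no offline profiling or per-model pre-computation, which matches the training-free property the abstract describes.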
@article{zhou2025_2507.02860,
  title   = {Less is Enough: Training-Free Video Diffusion Acceleration via Runtime-Adaptive Caching},
  author  = {Xin Zhou and Dingkang Liang and Kaijin Chen and Tianrui Feng and Xiwu Chen and Hongkai Lin and Yikang Ding and Feiyang Tan and Hengshuang Zhao and Xiang Bai},
  journal = {arXiv preprint arXiv:2507.02860},
  year    = {2025}
}