Foresight: Adaptive Layer Reuse for Accelerated and High-Quality Text-to-Video Generation

Diffusion Transformers (DiTs) achieve state-of-the-art results in text-to-image generation, text-to-video generation, and editing. However, their large model size and the quadratic cost of spatial-temporal attention over multiple denoising steps make video generation computationally expensive. Static caching mitigates this by reusing features across fixed steps, but it fails to adapt to generation dynamics, leading to suboptimal trade-offs between speed and quality. We propose Foresight, an adaptive layer-reuse technique that reduces computational redundancy across denoising steps while preserving baseline performance. Foresight dynamically identifies and reuses DiT block outputs for all layers across steps, adapting to generation parameters such as resolution and denoising schedules to optimize efficiency. Applied to OpenSora, Latte, and CogVideoX, Foresight achieves up to 1.63x end-to-end speedup while maintaining video quality. The source code of Foresight is available at this https URL.
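To make the idea of adaptive layer reuse concrete, the sketch below wraps a single DiT block so that its cached output is reused whenever the block's input has changed little since the step that filled the cache. This is only an illustrative assumption: the reuse test (relative L2 change), the threshold value, and the class name ReusableBlock are not taken from the paper, whose actual decision rule additionally adapts to resolution and the denoising schedule.

import torch
import torch.nn as nn

class ReusableBlock(nn.Module):
    """Wraps a transformer block and reuses its cached output across
    denoising steps when the input has changed only slightly (a minimal
    sketch of layer reuse, not the authors' exact criterion)."""

    def __init__(self, block: nn.Module, threshold: float = 0.05):
        super().__init__()
        self.block = block
        self.threshold = threshold      # reuse tolerance (assumed value)
        self.cached_input = None        # input seen when the cache was filled
        self.cached_output = None       # block output to reuse

    def forward(self, x: torch.Tensor, *args, **kwargs) -> torch.Tensor:
        if self.cached_input is not None and self.cached_input.shape == x.shape:
            # Relative L2 change of the block input between denoising steps.
            delta = (x - self.cached_input).norm() / (self.cached_input.norm() + 1e-8)
            if delta < self.threshold:
                # Small change: skip the block and return the cached features.
                return self.cached_output
        # Large change (or no cache yet): recompute and refresh the cache.
        out = self.block(x, *args, **kwargs)
        self.cached_input, self.cached_output = x.detach(), out.detach()
        return out

Wrapping every block of the DiT this way lets the reuse decision be made per layer and per step, rather than at fixed steps as in static caching.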
@article{adnan2025_2506.00329,
  title   = {Foresight: Adaptive Layer Reuse for Accelerated and High-Quality Text-to-Video Generation},
  author  = {Muhammad Adnan and Nithesh Kurella and Akhil Arunkumar and Prashant J. Nair},
  journal = {arXiv preprint arXiv:2506.00329},
  year    = {2025}
}