Faster Diffusion via Temporal Attention Decomposition
Haozhe Liu, Wentian Zhang, Jinheng Xie, Francesco Faccio, Mengmeng Xu, Tao Xiang, Mike Zheng Shou, Juan-Manuel Perez-Rua, Jürgen Schmidhuber · 3 April 2024 · DiffM
arXiv: 2404.02747

Papers citing "Faster Diffusion via Temporal Attention Decomposition"

19 / 19 papers shown
Can We Achieve Efficient Diffusion without Self-Attention? Distilling Self-Attention into Convolutions
Ziyi Dong, Chengxing Zhou, Weijian Deng, Pengxu Wei, Xiangyang Ji, Liang Lin · 30 Apr 2025 · MQ
STAGE: Stemmed Accompaniment Generation through Prefix-Based Conditioning
Giorgio Strano, Chiara Ballanti, Donato Crisostomi, Michele Mancusi, Luca Cosmo, Emanuele Rodolà · 08 Apr 2025
Classic Video Denoising in a Machine Learning World: Robust, Fast, and Controllable
Xin Jin, Simon Niklaus, Zhoutong Zhang, Zhihao Xia, Chunle Guo, Yuting Yang, J. Chen, Chongyi Li · 04 Apr 2025 · VGen
FastVAR: Linear Visual Autoregressive Modeling via Cached Token Pruning
Hang Guo, Yawei Li, Taolin Zhang, J. Wang, Tao Dai, Shu-Tao Xia, Luca Benini · 30 Mar 2025
EEdit: Rethinking the Spatial and Temporal Redundancy for Efficient Image Editing
Zexuan Yan, Yue Ma, Chang Zou, Wenteng Chen, Qifeng Chen, Linfeng Zhang · 13 Mar 2025
FlexiDiT: Your Diffusion Transformer Can Easily Generate High-Quality Samples with Less Compute
Sotiris Anagnostidis, Gregor Bachmann, Yeongmin Kim, Jonas Kohler, Markos Georgopoulos, A. Sanakoyeu, Yuming Du, Albert Pumarola, Ali K. Thabet, Edgar Schönfeld · 27 Feb 2025
CAT Pruning: Cluster-Aware Token Pruning For Text-to-Image Diffusion Models
Xinle Cheng, Zhuoming Chen, Zhihao Jia · 01 Feb 2025 · DiffM, VLM
Accelerate High-Quality Diffusion Models with Inner Loop Feedback
M. Gwilliam, Han Cai, Di Wu, Abhinav Shrivastava, Zhiyu Cheng · 22 Jan 2025
From Slow Bidirectional to Fast Autoregressive Video Diffusion Models
Tianwei Yin, Qiang Zhang, Richard Zhang, William T. Freeman, F. Durand, Eli Shechtman, Xun Huang · 10 Dec 2024 · VGen, DiffM
TASR: Timestep-Aware Diffusion Model for Image Super-Resolution
Qinwei Lin, Xiaopeng Sun, Yu Gao, Yujie Zhong, Dengjie Li, Zheng Zhao, Haoqian Wang · 04 Dec 2024
SmoothCache: A Universal Inference Acceleration Technique for Diffusion Transformers
Joseph Liu, Joshua Geddes, Ziyu Guo, Haomiao Jiang, Mahesh Kumar Nandwana · 15 Nov 2024
FORA: Fast-Forward Caching in Diffusion Transformer Acceleration
Pratheba Selvaraju, Tianyu Ding, Tianyi Chen, Ilya Zharkov, Luming Liang · 01 Jul 2024
AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising
Zigeng Chen, Xinyin Ma, Gongfan Fang, Zhenxiong Tan, Xinchao Wang · 11 Jun 2024
Δ-DiT: A Training-Free Acceleration Method Tailored for Diffusion Transformers
Pengtao Chen, Mingzhu Shen, Peng Ye, Jianjian Cao, Chongjun Tu, C. Bouganis, Yiren Zhao, Tao Chen · 03 Jun 2024
Towards Understanding the Working Mechanism of Text-to-Image Diffusion Model
Mingyang Yi, Aoxue Li, Yi Xin, Zhenguo Li · 24 May 2024 · DiffM
Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable
Haozhe Liu, Wentian Zhang, Bing Li, Bernard Ghanem, Jürgen Schmidhuber · 01 May 2024 · DiffM, WIGM, AAML
Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets
A. Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, ..., Zion English, Vikram S. Voleti, Adam Letts, Varun Jampani, Robin Rombach · 25 Nov 2023 · VGen
Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever · 24 Feb 2021 · VLM
U-Net: Convolutional Networks for Biomedical Image Segmentation
Olaf Ronneberger, Philipp Fischer, Thomas Brox · 18 May 2015 · SSeg, 3DV