ResearchTrend.AI
SparseDM: Toward Sparse Efficient Diffusion Models
Kafeng Wang, Jianfei Chen, He Li, Zhenpeng Mi, Jun Zhu
16 April 2024 · DiffM

Papers citing "SparseDM: Toward Sparse Efficient Diffusion Models"

9 papers shown:
Sparse-to-Sparse Training of Diffusion Models
Inês Cardoso Oliveira, Decebal Constantin Mocanu, Luis A. Leiva
30 Apr 2025 · DiffM
DyDiT++: Dynamic Diffusion Transformers for Efficient Visual Generation
Wangbo Zhao, Yizeng Han, Jiasheng Tang, Kai Wang, Hao Luo, Yibing Song, Gao Huang, Fan Wang, Yang You
09 Apr 2025
SQ-DM: Accelerating Diffusion Models with Aggressive Quantization and Temporal Sparsity
Zichen Fan, Steve Dai, Rangharajan Venkatesan, Dennis Sylvester, Brucek Khailany
28 Jan 2025 · MQ
Dynamic Diffusion Transformer
Wangbo Zhao, Yizeng Han, Jiasheng Tang, Kai Wang, Yibing Song, Gao Huang, Fan Wang, Yang You
04 Oct 2024
Accelerating Transformer Pre-training with 2:4 Sparsity
Yuezhou Hu, Kang Zhao, Weiyu Huang, Jianfei Chen, Jun Zhu
02 Apr 2024
LCM-LoRA: A Universal Stable-Diffusion Acceleration Module
Simian Luo, Yiqin Tan, Suraj Patil, Daniel Gu, Patrick von Platen, Apolinário Passos, Longbo Huang, Jian Li, Hang Zhao
09 Nov 2023 · MoMe
DepGraph: Towards Any Structural Pruning
Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, Xinchao Wang
30 Jan 2023 · GNN
Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry
16 Feb 2021
Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
31 Jan 2021 · MQ