E-MD3C: Taming Masked Diffusion Transformers for Efficient Zero-Shot Object Customization

Abstract

We propose E-MD3C (Efficient Masked Diffusion Transformer with Disentangled Conditions and Compact Collector), a highly efficient framework for zero-shot object image customization. Unlike prior works that rely on resource-intensive U-Net architectures, our approach employs lightweight masked diffusion transformers operating on latent patches, offering significantly improved computational efficiency. The framework integrates three core components: (1) an efficient masked diffusion transformer for processing autoencoder latents, (2) a disentangled condition design that keeps the conditioning compact while preserving background alignment and fine details, and (3) a learnable Conditions Collector that consolidates multiple inputs into a compact representation for efficient denoising and learning. E-MD3C outperforms the existing approach on the VITON-HD dataset across PSNR, FID, SSIM, and LPIPS, while offering clear advantages in parameter count, memory efficiency, and inference speed. With only 1/4 of the parameters, our Transformer-based 468M model delivers 2.5× faster inference and uses 2/3 of the GPU memory compared to a 1720M U-Net-based latent diffusion model.
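
The abstract describes the architecture only at a high level. As a concrete illustration, the PyTorch sketch below shows one plausible reading of components (1) and (3): a learnable Conditions Collector that cross-attends a small set of learnable query tokens over several condition streams, and a transformer denoiser over masked latent patches conditioned on the resulting compact tokens. All module names, token counts, and the wiring are illustrative assumptions, not the authors' implementation.

# A minimal sketch (not the paper's code) of a Conditions Collector
# feeding a masked diffusion transformer over latent patches. Sizes,
# names, and the query-token design are assumptions for illustration.
import torch
import torch.nn as nn


class ConditionsCollector(nn.Module):
    """Cross-attend learnable queries over concatenated condition tokens."""

    def __init__(self, dim: int, num_queries: int = 16, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, *condition_streams: torch.Tensor) -> torch.Tensor:
        # condition_streams: tensors of shape (B, N_i, dim), e.g. background
        # latents and object-detail features, concatenated along the token axis.
        cond = torch.cat(condition_streams, dim=1)
        q = self.queries.expand(cond.size(0), -1, -1)
        out, _ = self.attn(q, cond, cond)  # (B, num_queries, dim)
        return self.norm(out)              # compact condition tokens


class MaskedDiffusionTransformer(nn.Module):
    """Denoise latent patch tokens; masked positions are replaced by a
    learned mask token, and compact condition tokens are prepended."""

    def __init__(self, dim: int = 256, depth: int = 4, num_heads: int = 8):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(
            dim, num_heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, dim)  # per-token prediction head

    def forward(self, latents, mask, cond_tokens):
        # latents: (B, N, dim) noisy latent patches; mask: (B, N) bool,
        # True where the patch is hidden from the model.
        x = torch.where(mask.unsqueeze(-1), self.mask_token, latents)
        x = torch.cat([cond_tokens, x], dim=1)         # prepend conditions
        x = self.blocks(x)
        return self.head(x[:, cond_tokens.size(1):])  # drop condition slots


# Usage with two hypothetical condition streams of different lengths.
collector = ConditionsCollector(dim=256)
denoiser = MaskedDiffusionTransformer(dim=256)
background = torch.randn(2, 64, 256)   # e.g. background latent tokens
details = torch.randn(2, 32, 256)      # e.g. object-detail tokens
cond = collector(background, details)  # (2, 16, 256) compact representation
noisy = torch.randn(2, 128, 256)
mask = torch.rand(2, 128) < 0.5
pred = denoiser(noisy, mask, cond)     # (2, 128, 256)

Under this reading, compressing the conditions to a fixed, small number of tokens keeps the denoiser's sequence length, and hence its attention cost, nearly independent of how many condition inputs are attached, which is consistent with the efficiency claims above.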

@article{pham2025_2502.09164,
  title={E-MD3C: Taming Masked Diffusion Transformers for Efficient Zero-Shot Object Customization},
  author={Trung X. Pham and Zhang Kang and Ji Woo Hong and Xuran Zheng and Chang D. Yoo},
  journal={arXiv preprint arXiv:2502.09164},
  year={2025}
}