E-MD3C: Taming Masked Diffusion Transformers for Efficient Zero-Shot Object Customization
We propose E-MD3C (Efficient Masked Diffusion Transformer with Disentangled Conditions and Compact Collector), a highly efficient framework for zero-shot object image customization. Unlike prior works that rely on resource-intensive Unet architectures, our approach employs lightweight masked diffusion transformers operating on latent patches, offering significantly improved computational efficiency. The framework integrates three core components: (1) an efficient masked diffusion transformer for processing autoencoder latents, (2) a disentangled condition design that ensures compactness while preserving background alignment and fine details, and (3) a learnable Conditions Collector that consolidates multiple inputs into a compact representation for efficient denoising and learning. E-MD3C outperforms the existing approach on the VITON-HD dataset across metrics such as PSNR, FID, SSIM, and LPIPS, demonstrating clear advantages in parameters, memory efficiency, and inference speed. With roughly a quarter of the parameters, our Transformer-based 468M model delivers faster inference and uses a fraction of the GPU memory compared to a 1720M Unet-based latent diffusion model.
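To make the Conditions Collector idea concrete, below is a minimal PyTorch sketch of one plausible design: a fixed set of learnable query tokens cross-attends over the concatenated condition tokens (object, background, and fine-detail streams) and compresses them into a compact sequence for the denoiser. All module names, token counts, and dimensions here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a "Conditions Collector": learnable queries
# compress multiple condition token streams into a compact set.
# Names and shapes are assumptions for illustration only.
import torch
import torch.nn as nn

class ConditionsCollector(nn.Module):
    def __init__(self, dim: int = 512, num_queries: int = 16, num_heads: int = 8):
        super().__init__()
        # Fixed-size set of learnable queries that "collect" the conditions.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, *condition_tokens: torch.Tensor) -> torch.Tensor:
        # Each input: (batch, tokens_i, dim). Concatenate along the token
        # axis, then compress into num_queries tokens via cross-attention.
        cond = torch.cat(condition_tokens, dim=1)
        q = self.queries.unsqueeze(0).expand(cond.size(0), -1, -1)
        collected, _ = self.attn(q, cond, cond)
        return self.norm(collected)

# Usage: three disentangled condition streams collapse to 16 tokens.
collector = ConditionsCollector()
obj = torch.randn(2, 64, 512)     # object appearance tokens (assumed)
bg = torch.randn(2, 256, 512)     # background alignment tokens (assumed)
detail = torch.randn(2, 32, 512)  # fine-detail tokens (assumed)
compact = collector(obj, bg, detail)
print(compact.shape)  # torch.Size([2, 16, 512])
```

Compressing all conditions into a short fixed-length sequence keeps the per-step cost of conditioning the diffusion transformer constant regardless of how many condition streams are supplied, which is consistent with the efficiency claims above.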