ResearchTrend.AI
Cross-view Masked Diffusion Transformers for Person Image Synthesis

2 February 2024
T. Pham
Zhang Kang
Chang-Dong Yoo
Abstract

We present X-MDPT (Cross-view Masked Diffusion Prediction Transformers), a novel diffusion model designed for pose-guided human image generation. X-MDPT distinguishes itself by employing masked diffusion transformers that operate on latent patches, a departure from the commonly used U-Net structures in existing works. The model comprises three key modules: 1) a denoising diffusion Transformer, 2) an aggregation network that consolidates conditions into a single vector for the diffusion process, and 3) a mask cross-prediction module that enhances representation learning with semantic information from the reference image. X-MDPT demonstrates scalability, improving FID, SSIM, and LPIPS with larger models. Despite its simple design, our model outperforms state-of-the-art approaches on the DeepFashion dataset while exhibiting efficiency in terms of training parameters, training time, and inference speed. Our compact 33MB model achieves an FID of 7.42, surpassing a prior U-Net latent diffusion approach (FID 8.07) using 11× fewer parameters. Our best model surpasses the pixel-based diffusion with 2/3 of the parameters and achieves 5.43× faster inference. The code is available at https://github.com/trungpx/xmdpt.
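The two conditioning ideas named in the abstract — an aggregation network that consolidates conditions into a single vector, and masked prediction over latent patches — can be illustrated with a minimal, dependency-free sketch. This is an assumption-laden toy, not the paper's implementation: the real aggregation network is learned (here it is a plain element-wise average), and real latent patches come from a VAE encoder (here they are lists of floats).

```python
import random

def aggregate_conditions(condition_vectors):
    """Consolidate several condition vectors (e.g. pose and reference-image
    embeddings) into one vector. Element-wise averaging is a stand-in for
    the paper's learned aggregation network."""
    dim = len(condition_vectors[0])
    n = len(condition_vectors)
    return [sum(v[i] for v in condition_vectors) / n for i in range(dim)]

def mask_patches(patches, mask_ratio, rng):
    """Randomly hide a fraction of latent patches, returning the visible
    patches and the sorted indices of the masked ones. The masked indices
    are what a cross-prediction module would be asked to reconstruct."""
    n_mask = int(len(patches) * mask_ratio)
    masked_idx = set(rng.sample(range(len(patches)), n_mask))
    visible = [p for i, p in enumerate(patches) if i not in masked_idx]
    return visible, sorted(masked_idx)

# Toy demo: two 2-D condition vectors collapse to one; half of 8 patches
# are masked out for the prediction task.
rng = random.Random(0)
cond = aggregate_conditions([[1.0, 2.0], [3.0, 4.0]])  # -> [2.0, 3.0]
patches = [[float(i)] for i in range(8)]
visible, masked = mask_patches(patches, 0.5, rng)
```

In the actual model, `cond` would condition the denoising diffusion Transformer, and the masked patches would be predicted from the visible ones to inject semantic information from the reference image.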
