Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget

22 July 2024
Vikash Sehwag
Xianghao Kong
Jingtao Li
Michael Spranger
Lingjuan Lyu
arXiv:2407.15811
Abstract

As scaling laws in generative AI push performance, they simultaneously concentrate the development of these models among actors with large computational resources. With a focus on text-to-image (T2I) generative models, we aim to address this bottleneck by demonstrating very low-cost training of large-scale T2I diffusion transformer models. As the computational cost of transformers increases with the number of patches in each image, we propose randomly masking up to 75% of the image patches during training. We propose a deferred masking strategy that preprocesses all patches using a patch-mixer before masking, which significantly reduces the performance degradation from masking and makes it superior to model downscaling for reducing computational cost. We also incorporate the latest improvements in transformer architecture, such as mixture-of-experts layers, to improve performance, and we identify the critical benefit of using synthetic images in micro-budget training. Finally, using only 37M publicly available real and synthetic images, we train a 1.16 billion parameter sparse transformer at an economical cost of only $1,890 and achieve a 12.7 FID in zero-shot generation on the COCO dataset. Notably, our model achieves competitive FID and high-quality generations while incurring 118× lower cost than stable diffusion models and 14× lower cost than the current state-of-the-art approach, which costs $28,400. We aim to release our end-to-end training pipeline to further democratize the training of large-scale diffusion models on micro-budgets.
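The core idea in the abstract, deferred masking, can be illustrated with a minimal PyTorch sketch: every patch first passes through a lightweight patch-mixer (so masked patches still contribute context), and only then is a large fraction of patches dropped before the expensive backbone. All names here (PatchMixer, DeferredMaskingBackbone), layer sizes, and the latent-patch setup are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch of deferred patch masking, assuming a latent-space diffusion
# transformer. Module names and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class PatchMixer(nn.Module):
    """Lightweight transformer that processes ALL patches before masking,
    so information from soon-to-be-masked patches is mixed into the rest."""

    def __init__(self, dim: int, depth: int = 2, heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.blocks(x)


def random_patch_mask(x: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Keep a random subset of patches per image and drop the rest.
    x: (batch, num_patches, dim) -> (batch, num_kept, dim)"""
    b, n, _ = x.shape
    num_keep = max(1, int(n * keep_ratio))
    # Random permutation of patch indices per image; keep the first num_keep.
    idx = torch.rand(b, n, device=x.device).argsort(dim=1)[:, :num_keep]
    return x.gather(1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))


class DeferredMaskingBackbone(nn.Module):
    """Patch embed -> patch-mixer on all patches -> mask up to 75% -> heavy backbone."""

    def __init__(self, dim: int = 256, mask_ratio: float = 0.75):
        super().__init__()
        self.keep_ratio = 1.0 - mask_ratio
        self.patch_embed = nn.Conv2d(4, dim, kernel_size=2, stride=2)  # 4-channel latents
        self.mixer = PatchMixer(dim)
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=8, dim_feedforward=4 * dim, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(layer, num_layers=12)

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        x = self.patch_embed(latents)               # (b, dim, h, w)
        x = x.flatten(2).transpose(1, 2)            # (b, num_patches, dim)
        x = self.mixer(x)                           # mix BEFORE masking (deferred masking)
        x = random_patch_mask(x, self.keep_ratio)   # heavy backbone sees ~25% of patches
        return self.backbone(x)


if __name__ == "__main__":
    model = DeferredMaskingBackbone()
    out = model(torch.randn(2, 4, 32, 32))          # two 32x32x4 latent images
    print(out.shape)                                 # torch.Size([2, 64, 256]): 75% of 256 patches dropped
```

The key cost saving is that the 12-layer backbone, which dominates compute, attends over only the kept patches, while the shallow mixer keeps the degradation from masking small; conditioning on text and diffusion timestep is omitted here for brevity.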
