
MDSGen: Fast and Efficient Masked Diffusion Temporal-Aware Transformers for Open-Domain Sound Generation

International Conference on Learning Representations (ICLR), 2025
Chang D. Yoo
Main: 10 pages · Bibliography: 4 pages · Appendix: 12 pages · 19 figures · 17 tables
Abstract

We introduce MDSGen, a novel framework for vision-guided open-domain sound generation optimized for model parameter size, memory consumption, and inference speed. This framework incorporates two key innovations: (1) a redundant video feature removal module that filters out unnecessary visual information, and (2) a temporal-aware masking strategy that leverages temporal context for enhanced audio generation accuracy. In contrast to existing resource-heavy Unet-based models, MDSGen employs denoising masked diffusion transformers, facilitating efficient generation without reliance on pre-trained diffusion models. Evaluated on the benchmark VGGSound dataset, our smallest model (5M parameters) achieves 97.9% alignment accuracy, using 172× fewer parameters, 371% less memory, and offering 36× faster inference than the current 860M-parameter state-of-the-art model (93.9% accuracy). The larger model (131M parameters) reaches nearly 99% accuracy while requiring 6.5× fewer parameters. These results highlight the scalability and effectiveness of our approach. The code is available at this https URL.
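Only the abstract is shown here, so the paper's exact masking procedure is not given. The sketch below is a minimal illustration of what a temporal-aware masking strategy over a grid of audio latent tokens could look like: masking whole time frames rather than scattering masked tokens uniformly, so reconstruction must draw on temporal context. The function name `temporal_aware_mask`, the (frames × frequency) token layout, and the frame-wise masking choice are assumptions made for illustration, not the authors' implementation.

```python
import torch


def temporal_aware_mask(num_frames: int, num_freq: int, mask_ratio: float,
                        generator: torch.Generator | None = None) -> torch.Tensor:
    """Return a boolean mask over a (num_frames, num_freq) token grid.

    Illustrative sketch only: entire time frames are masked (contiguous along
    the temporal axis) instead of masking tokens uniformly at random, so the
    model must rely on surrounding temporal context to predict the missing
    audio segments.
    """
    num_masked_frames = int(round(mask_ratio * num_frames))
    # Choose which time frames to mask entirely.
    perm = torch.randperm(num_frames, generator=generator)
    masked_frames = perm[:num_masked_frames]
    mask = torch.zeros(num_frames, num_freq, dtype=torch.bool)
    mask[masked_frames] = True
    return mask  # True = token is masked and must be predicted


if __name__ == "__main__":
    m = temporal_aware_mask(num_frames=16, num_freq=8, mask_ratio=0.7)
    print(m.float().mean())  # roughly 0.7 of tokens masked, grouped by frame
```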
