TA-V2A: Textually Assisted Video-to-Audio Generation

Abstract

As artificial intelligence-generated content (AIGC) continues to evolve, video-to-audio (V2A) generation has emerged as a key area with promising applications in multimedia editing, augmented reality, and automated content creation. While Transformer and diffusion models have advanced audio generation, a significant challenge persists in extracting precise semantic information from videos, as current models often lose sequential context by relying solely on frame-based features. To address this, we present TA-V2A, a method that integrates language, audio, and video features to improve semantic representation in latent space. By incorporating large language models for enhanced video comprehension, our approach leverages text guidance to enrich semantic expression. Built on a diffusion model, our system uses automated text modulation to improve inference quality and efficiency, and offers personalized control through a text-guided interface. This integration strengthens semantic expression while preserving temporal alignment, leading to more accurate and coherent video-to-audio generation.
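The abstract describes a pipeline in which LLM-derived text and per-frame video features are fused into a shared latent condition for an audio diffusion model. As a rough illustration only, the PyTorch sketch below shows one plausible way such text-video fusion and a single conditional denoising step could look. All names, dimensions, the cross-attention fusion, and the denoise_step helper are our hypothetical assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class TextVideoFusion(nn.Module):
    """Hypothetical fusion module: projects text and per-frame video
    features into a shared latent space and fuses them with
    cross-attention, keeping the frame (temporal) axis intact."""
    def __init__(self, text_dim=768, video_dim=512, latent_dim=512, heads=8):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, latent_dim)
        self.video_proj = nn.Linear(video_dim, latent_dim)
        self.cross_attn = nn.MultiheadAttention(latent_dim, heads, batch_first=True)

    def forward(self, text_emb, video_emb):
        # text_emb:  (B, n_tokens, text_dim)  -- caption tokens from an LLM
        # video_emb: (B, n_frames, video_dim) -- per-frame visual features
        q = self.video_proj(video_emb)         # queries preserve temporal order
        kv = self.text_proj(text_emb)          # keys/values carry text semantics
        fused, _ = self.cross_attn(q, kv, kv)  # (B, n_frames, latent_dim)
        return fused

def denoise_step(denoiser, z_t, t, cond, alpha_t, sigma_t):
    """Schematic single step of a conditional diffusion model: the
    denoiser predicts the noise in the audio latent z_t given the
    timestep t and the fused text-video condition cond."""
    eps_hat = denoiser(z_t, t, cond)              # predicted noise
    z0_hat = (z_t - sigma_t * eps_hat) / alpha_t  # estimated clean latent
    return z0_hat

if __name__ == "__main__":
    fusion = TextVideoFusion()
    text = torch.randn(2, 32, 768)   # hypothetical LLM caption embeddings
    video = torch.randn(2, 64, 512)  # hypothetical features for 64 frames
    cond = fusion(text, video)
    print(cond.shape)                # torch.Size([2, 64, 512])

Because the video features act as attention queries, the fused condition retains one vector per frame, which is one way the temporal alignment emphasized in the abstract could be carried through to the audio generator.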

@article{you2025_2503.10700,
  title={TA-V2A: Textually Assisted Video-to-Audio Generation},
  author={Yuhuan You and Xihong Wu and Tianshu Qu},
  journal={arXiv preprint arXiv:2503.10700},
  year={2025}
}