VoiceCraft-Dub: Automated Video Dubbing with Neural Codec Language Models

We present VoiceCraft-Dub, a novel approach for automated video dubbing that synthesizes high-quality speech from text and facial cues. This task has broad applications in filmmaking, multimedia creation, and assisting voice-impaired individuals. Building on the success of Neural Codec Language Models (NCLMs) for speech synthesis, our method extends their capabilities by incorporating video features, ensuring that synthesized speech is time-synchronized and expressively aligned with facial movements while preserving natural prosody. To inject visual cues, we design adapters to align facial features with the NCLM token space and introduce audio-visual fusion layers to merge audio-visual information within the NCLM framework. Additionally, we curate CelebV-Dub, a new dataset of expressive, real-world videos specifically designed for automated video dubbing. Extensive experiments show that our model achieves high-quality, intelligible, and natural speech synthesis with accurate lip synchronization, outperforming existing methods in human perceptual evaluations and performing favorably on objective metrics. We also adapt VoiceCraft-Dub to the video-to-speech task, demonstrating its versatility for various applications.
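To make the two conditioning components named in the abstract concrete, here is a minimal PyTorch sketch of a facial-feature adapter that projects per-frame visual features into the NCLM token-embedding space, and a cross-attention fusion layer that merges the audio and visual streams. All module names, dimensions, and the choice of cross-attention as the fusion mechanism are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class FacialFeatureAdapter(nn.Module):
    """Maps per-frame facial features (e.g., from a video encoder)
    into the NCLM's token-embedding dimension so they can serve as a
    conditioning sequence. Dimensions are hypothetical."""

    def __init__(self, visual_dim: int = 512, token_dim: int = 1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(visual_dim, token_dim),
            nn.GELU(),
            nn.Linear(token_dim, token_dim),
        )
        self.norm = nn.LayerNorm(token_dim)

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (batch, video_frames, visual_dim)
        return self.norm(self.proj(visual_feats))


class AudioVisualFusionLayer(nn.Module):
    """One plausible fusion scheme: audio token states attend to the
    adapted visual sequence, with a residual connection so the NCLM's
    original audio pathway is preserved."""

    def __init__(self, token_dim: int = 1024, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(
            token_dim, num_heads, batch_first=True
        )
        self.norm = nn.LayerNorm(token_dim)

    def forward(self, audio_states, visual_tokens):
        # audio_states:  (batch, audio_tokens, token_dim)
        # visual_tokens: (batch, video_frames, token_dim)
        fused, _ = self.cross_attn(
            query=audio_states, key=visual_tokens, value=visual_tokens
        )
        return self.norm(audio_states + fused)


# Shape check with toy tensors (frame/token rates are assumptions).
adapter = FacialFeatureAdapter()
fusion = AudioVisualFusionLayer()
video = torch.randn(2, 75, 512)    # e.g., 3 s of video at 25 fps
audio = torch.randn(2, 150, 1024)  # e.g., codec token states at 50 Hz
out = fusion(audio, adapter(video))
print(out.shape)  # torch.Size([2, 150, 1024])
```

Cross-attention is only one way to realize the fusion layers; gating or concatenating the visual tokens along the sequence axis would slot into the same interface.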
@article{sung-bin2025_2504.02386,
  title={VoiceCraft-Dub: Automated Video Dubbing with Neural Codec Language Models},
  author={Kim Sung-Bin and Jeongsoo Choi and Puyuan Peng and Joon Son Chung and Tae-Hyun Oh and David Harwath},
  journal={arXiv preprint arXiv:2504.02386},
  year={2025}
}