ATL-Diff: Audio-Driven Talking Head Generation with Early Landmarks-Guide Noise Diffusion

Advanced Video and Signal Based Surveillance (AVSS), 2025
Hoang-Son Vo
Quang-Vinh Nguyen
Seungwon Kim
Hyung-Jeong Yang
Soonja Yeom
Soo-Hyung Kim
Main: 5 pages · 4 figures · 3 tables · Bibliography: 1 page
Abstract

Audio-driven talking head generation requires precise synchronization between facial animations and audio signals. This paper introduces ATL-Diff, a novel approach that addresses synchronization limitations while reducing noise and computational costs. Our framework features three key components: a Landmark Generation Module that converts audio into facial landmarks, a Landmarks-Guide Noise approach that decouples the audio signal by distributing noise according to the landmarks, and a 3D Identity Diffusion network that preserves identity characteristics. Experiments on the MEAD and CREMA-D datasets demonstrate that ATL-Diff outperforms state-of-the-art methods across all metrics. Our approach achieves near real-time processing with high-quality animations, computational efficiency, and exceptional preservation of facial nuances. This advancement offers promising applications for virtual assistants, education, medical communication, and digital platforms. The source code is available at this https URL.
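To make the three-stage pipeline concrete, below is a minimal sketch assuming a PyTorch implementation. All module names, feature dimensions, landmark counts, and the specific noise-distribution scheme are illustrative assumptions based only on the abstract, not the authors' actual code.

```python
# Hypothetical sketch of the ATL-Diff pipeline: audio -> landmarks ->
# landmarks-guided noise, which a 3D identity-preserving diffusion network
# would then denoise. Everything here is an assumption for illustration.
import torch
import torch.nn as nn

class LandmarkGenerationModule(nn.Module):
    """Maps a per-frame audio feature sequence to 2D facial landmarks
    (68 points assumed, coordinates normalized to [0, 1])."""
    def __init__(self, audio_dim=256, num_landmarks=68, hidden=512):
        super().__init__()
        self.encoder = nn.GRU(audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_landmarks * 2)  # (x, y) per point
        self.num_landmarks = num_landmarks

    def forward(self, audio_feats):                # (B, T, audio_dim)
        h, _ = self.encoder(audio_feats)           # (B, T, hidden)
        out = torch.sigmoid(self.head(h))          # squash coords into [0, 1]
        return out.view(*audio_feats.shape[:2], self.num_landmarks, 2)

def landmarks_guide_noise(landmarks, height, width, sigma=4.0):
    """One plausible reading of 'distributing noise according to landmarks':
    scale Gaussian noise by a landmark heatmap, concentrating diffusion noise
    around the audio-driven facial regions."""
    B, T, N, _ = landmarks.shape
    ys = torch.arange(height, dtype=torch.float32).view(1, 1, 1, height, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, 1, 1, 1, width)
    lx = landmarks[..., 0].view(B, T, N, 1, 1) * (width - 1)
    ly = landmarks[..., 1].view(B, T, N, 1, 1) * (height - 1)
    heat = torch.exp(-((xs - lx) ** 2 + (ys - ly) ** 2) / (2 * sigma ** 2))
    heat = heat.amax(dim=2)                        # (B, T, H, W): max over points
    return torch.randn(B, T, height, width) * (1.0 + heat)

# Usage: 1 second of audio features at 25 fps yields 25 frames of guided noise.
audio = torch.randn(1, 25, 256)
landmarks = LandmarkGenerationModule()(audio)      # (1, 25, 68, 2)
noise = landmarks_guide_noise(landmarks, height=64, width=64)
print(noise.shape)                                 # torch.Size([1, 25, 64, 64])
```

The point of this sketch is the data flow: because the audio signal is consumed once to produce landmarks, the diffusion stage sees only landmark-shaped noise rather than raw audio features, which is one way the described decoupling could reduce noise and per-step computation.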
