Dimitra: Audio-driven Diffusion model for Expressive Talking Head Generation

24 February 2025
Baptiste Chopin
Tashvik Dhamija
Pranav Balaji
Yaohui Wang
Antitza Dantcheva
Communities: DiffM, VGen
Abstract

We propose Dimitra, a novel framework for audio-driven talking head generation, designed to learn lip motion, facial expression, and head pose motion. Specifically, we train a conditional Motion Diffusion Transformer (cMDT) by modeling facial motion sequences with a 3D representation. We condition the cMDT on only two input signals: an audio sequence and a reference facial image. By extracting additional features directly from the audio, Dimitra increases the quality and realism of the generated videos. In particular, phoneme sequences contribute to the realism of lip motion, while the text transcript contributes to the realism of facial expression and head pose. Quantitative and qualitative experiments on two widely used datasets, VoxCeleb2 and HDTF, show that Dimitra outperforms existing approaches at generating realistic talking heads with convincing lip motion, facial expression, and head pose.
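The abstract describes a conditional Motion Diffusion Transformer (cMDT) that denoises 3D facial-motion sequences conditioned on an audio sequence and a reference facial image. The sketch below is a minimal, hypothetical PyTorch illustration of that conditioning pattern only; the module names, feature dimensions, timestep embedding, and prediction target are assumptions for illustration and are not taken from the paper's implementation.

# Hypothetical sketch of a conditional motion diffusion transformer in the spirit
# of the abstract. Dimensions, modules, and the prediction target are assumptions.
import torch
import torch.nn as nn

class ConditionalMotionDiffusionTransformer(nn.Module):
    def __init__(self, motion_dim=64, cond_dim=256, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        # Project noisy 3D facial-motion frames (lip/expression/pose coefficients) to tokens.
        self.motion_in = nn.Linear(motion_dim, d_model)
        # Conditioning projections: per-frame audio features and one reference-image embedding.
        self.audio_in = nn.Linear(cond_dim, d_model)
        self.ref_in = nn.Linear(cond_dim, d_model)
        # Simple MLP over the (normalized) scalar timestep; a sinusoidal embedding is also common.
        self.t_embed = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.motion_out = nn.Linear(d_model, motion_dim)

    def forward(self, noisy_motion, t, audio_feats, ref_embed):
        # noisy_motion: (B, T, motion_dim), audio_feats: (B, T, cond_dim), ref_embed: (B, cond_dim)
        tok = self.motion_in(noisy_motion) + self.audio_in(audio_feats)
        tok = tok + self.t_embed(t.view(-1, 1, 1).float() / 1000.0)  # broadcast timestep over frames
        tok = torch.cat([self.ref_in(ref_embed).unsqueeze(1), tok], dim=1)  # prepend reference token
        out = self.encoder(tok)[:, 1:]          # drop the reference token again
        return self.motion_out(out)             # predicted denoised motion (or noise, depending on setup)

# Toy usage: one denoising pass over random tensors.
model = ConditionalMotionDiffusionTransformer()
noisy = torch.randn(2, 50, 64)       # 50 frames of 3D facial-motion coefficients
audio = torch.randn(2, 50, 256)      # per-frame audio features (e.g. phoneme/text-derived)
ref = torch.randn(2, 256)            # reference-image embedding
t = torch.randint(0, 1000, (2,))     # diffusion timesteps
pred = model(noisy, t, audio, ref)
print(pred.shape)                    # torch.Size([2, 50, 64])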

View on arXiv
@article{chopin2025_2502.17198,
  title={Dimitra: Audio-driven Diffusion model for Expressive Talking Head Generation},
  author={Baptiste Chopin and Tashvik Dhamija and Pranav Balaji and Yaohui Wang and Antitza Dantcheva},
  journal={arXiv preprint arXiv:2502.17198},
  year={2025}
}