Wav2Pix: Speech-conditioned Face Generation using Generative Adversarial Networks

arXiv:1903.10195 · 25 March 2019

A. Duarte, Francisco Roldan, Miquel Tubau, Janna Escur, Santiago Pascual, Amaia Salvador, Eva Mohedano, Kevin McGuinness, Jordi Torres, Xavier Giró-i-Nieto

Topics: GAN, CVBM
Abstract

Speech is a rich biometric signal that contains information about the identity, gender and emotional state of the speaker. In this work, we explore its potential to generate face images of a speaker by conditioning a Generative Adversarial Network (GAN) on raw speech input. We propose a deep neural network that is trained from scratch in an end-to-end fashion, generating a face directly from the raw speech waveform without any additional identity information (e.g., a reference image or one-hot encoding). Our model is trained in a self-supervised manner by exploiting the audio and visual signals naturally aligned in videos. To enable training from video data, we present a novel dataset collected for this work, consisting of high-quality videos of YouTubers with notable expressiveness in both the speech and visual signals.
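
The abstract describes a GAN that maps a raw speech waveform directly to a face image, with no reference image or identity code. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of that idea, assuming a strided 1-D convolutional speech encoder, a DCGAN-style decoder producing 64x64 faces, and a standard convolutional discriminator. The 16 kHz one-second input, the 128-dimensional embedding, and all layer sizes are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """Strided 1-D convolutions over the raw waveform -> fixed-size embedding."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=31, stride=4, padding=15), nn.LeakyReLU(0.2),
            nn.Conv1d(32, 64, kernel_size=31, stride=4, padding=15), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 128, kernel_size=31, stride=4, padding=15), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1),          # collapse the time axis
        )
        self.proj = nn.Linear(128, emb_dim)

    def forward(self, wav):                   # wav: (B, 1, num_samples)
        h = self.net(wav).squeeze(-1)         # (B, 128)
        return self.proj(h)                   # (B, emb_dim)

class Generator(nn.Module):
    """DCGAN-style decoder mapping the speech embedding to a 64x64 RGB face."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(emb_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, speech_emb):            # (B, emb_dim)
        z = speech_emb.unsqueeze(-1).unsqueeze(-1)   # (B, emb_dim, 1, 1)
        return self.net(z)                    # (B, 3, 64, 64)

class Discriminator(nn.Module):
    """Scores whether a 64x64 face image is real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 1, 8, 1, 0),       # 8x8 -> 1x1 logit
        )

    def forward(self, img):                   # (B, 3, 64, 64)
        return self.net(img).view(-1)         # (B,) raw logits

# Example forward pass on a batch of one-second, 16 kHz dummy waveforms.
if __name__ == "__main__":
    enc, gen, disc = SpeechEncoder(), Generator(), Discriminator()
    wav = torch.randn(4, 1, 16000)
    fake_faces = gen(enc(wav))                # (4, 3, 64, 64)
    logits = disc(fake_faces)                 # (4,)
    print(fake_faces.shape, logits.shape)
```

In a self-supervised setup like the one the abstract describes, the training pairs come for free from video: each speech segment is paired with a face frame of the same speaker, and the discriminator could additionally be conditioned on the speech embedding so that generated faces must match the voice, not just look realistic. The paper's actual losses and conditioning details are in the full text.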
