Speaker-Independent Speech-Driven Visual Speech Synthesis using Domain-Adapted Acoustic Models
arXiv:1905.06860
15 May 2019
Ahmed Hussen Abdelaziz, B. Theobald, Justin Binder, Gabriele Fanelli, Paul Dixon, N. Apostoloff, T. Weise, Sachin Kajareker

Papers citing "Speaker-Independent Speech-Driven Visual Speech Synthesis using Domain-Adapted Acoustic Models" (4 papers)

On the role of Lip Articulation in Visual Speech Perception
Zakaria Aldeneh, Masha Fedzechkina, Skyler Seto, Katherine Metcalf, Miguel Sarabia, N. Apostoloff, B. Theobald
18 Mar 2022

Productivity, Portability, Performance: Data-Centric Python
Yiheng Wang, Yao Zhang, Yanzhang Wang, Yan Wan, Jiao Wang, Zhongyuan Wu, Yuhao Yang, Bowen She
01 Jul 2021

Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings
Patrik Jonell, Taras Kucherenko, G. Henter, Jonas Beskow
11 Jun 2020

Modality Dropout for Improved Performance-driven Talking Faces
Ahmed Hussen Abdelaziz, B. Theobald, Paul Dixon, Reinhard Knothe, N. Apostoloff, Sachin Kajareker
27 May 2020