ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
MoDiTalker: Motion-Disentangled Diffusion Model for High-Fidelity Talking Head Generation

28 March 2024
Authors: Seyeon Kim, Siyoon Jin, Jihye Park, Kihong Kim, Jiyoung Kim, Jisu Nam, Seungryong Kim
Topics: DiffM, VGen

Papers citing "MoDiTalker: Motion-Disentangled Diffusion Model for High-Fidelity Talking Head Generation"

5 papers shown.

1. Human Motion Diffusion Model. Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, Amit H. Bermano. 29 Sep 2022. Topics: DiffM, VGen.
2. One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning. Suzhe Wang, Lincheng Li, Yueqing Ding, Xin Yu. 06 Dec 2021. Topics: CVBM.
3. PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering. Yurui Ren, Gezhong Li, Yuanqi Chen, Thomas H. Li, Shan Liu. 17 Sep 2021. Topics: DiffM, VGen.
4. Is Space-Time Attention All You Need for Video Understanding? Gedas Bertasius, Heng Wang, Lorenzo Torresani. 09 Feb 2021. Topics: ViT.
5. VoxCeleb2: Deep Speaker Recognition. Joon Son Chung, Arsha Nagrani, Andrew Zisserman. 14 Jun 2018.