MoDiTalker: Motion-Disentangled Diffusion Model for High-Fidelity Talking Head Generation
arXiv:2403.19144 · 28 March 2024
Seyeon Kim, Siyoon Jin, Jihye Park, Kihong Kim, Jiyoung Kim, Jisu Nam, Seungryong Kim
Tags: DiffM, VGen

Papers citing "MoDiTalker: Motion-Disentangled Diffusion Model for High-Fidelity Talking Head Generation" (5 of 5 papers shown)

Human Motion Diffusion Model
Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, Amit H. Bermano
Tags: DiffM, VGen · 29 Sep 2022

One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning
Suzhe Wang, Lincheng Li, Yueqing Ding, Xin Yu
Tags: CVBM · 06 Dec 2021

PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering
Yurui Ren, Gezhong Li, Yuanqi Chen, Thomas H. Li, Shan Liu
Tags: DiffM, VGen · 17 Sep 2021

Is Space-Time Attention All You Need for Video Understanding?
Gedas Bertasius, Heng Wang, Lorenzo Torresani
Tags: ViT · 09 Feb 2021

VoxCeleb2: Deep Speaker Recognition
Joon Son Chung, Arsha Nagrani, Andrew Zisserman
14 Jun 2018