DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer

8 February 2024
Zhiyuan Ma, Xiangyu Zhu, Guojun Qi, Chen Qian, Zhaoxiang Zhang, Zhen Lei

Papers citing "DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer"

12 / 12 papers shown

DiffusionTalker: Efficient and Compact Speech-Driven 3D Talking Head via Personalizer-Guided Distillation
Peng Chen, Xiaobao Wei, Ming Lu, Hui Chen, Feng Tian
23 Mar 2025

Personalized Generation In Large Model Era: A Survey
Yiyan Xu, Jinghao Zhang, Alireza Salemi, Xinting Hu, W. Wang, Fuli Feng, Hamed Zamani, Xiangnan He, Tat-Seng Chua
3DV
04 Mar 2025

ARTalk: Speech-Driven 3D Head Animation via Autoregressive Model
Xuangeng Chu, Nabarun Goswami, Ziteng Cui, Hanqin Wang, Tatsuya Harada
DiffM
27 Feb 2025

ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE
Sichun Wu, Kazi Injamamul Haque, Zerrin Yumak
VGen
12 Sep 2024

Content and Style Aware Audio-Driven Facial Animation
Qingju Liu, Hyeongwoo Kim, Gaurav Bharaj
DiffM
13 Aug 2024

InterAct: Capture and Modelling of Realistic, Expressive and Interactive Activities between Two Persons in Daily Scenarios
Yinghao Huang, Leo Ho, Dafei Qin, Mingyi Shi, Taku Komura
VGen
19 May 2024

AnimateMe: 4D Facial Expressions via Diffusion Models
Dimitrios Gerogiannis, Foivos Paraperas-Papantoniou, Rolandos Alexandros Potamias, Alexandros Lattas, Stylianos Moschoglou, Stylianos Ploumpis, S. Zafeiriou
25 Mar 2024

SAiD: Speech-driven Blendshape Facial Animation with Diffusion
Inkyu Park, Jaewoong Cho
25 Dec 2023

SpeechAct: Towards Generating Whole-body Motion from Speech
Jinsong Zhang, Minjie Zhu, Yuxiang Zhang, Yebin Liu, Kun Li
29 Nov 2023

SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces
Ziqiao Peng, Yihao Luo, Yue Shi, Hao-Xuan Xu, Xiangyu Zhu, Jun He, Hongyan Liu, Zhaoxin Fan
19 Jun 2023

VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation
Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liangsheng Wang, Yujun Shen, Deli Zhao, Jinren Zhou, Tien-Ping Tan
DiffM, VGen
15 Mar 2023

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press, Noah A. Smith, M. Lewis
27 Aug 2021