
Interactive Generative Adversarial Networks for Facial Expression Generation in Dyadic Interactions

arXiv:1801.09092 · 27 January 2018
Behnaz Nojavanasghari, Yuchi Huang, Saad M. Khan
Tags: CVBM, GAN

Papers citing "Interactive Generative Adversarial Networks for Facial Expression Generation in Dyadic Interactions" (16 papers)
ReactDiff: Fundamental Multiple Appropriate Facial Reaction Diffusion Model
Luo Cheng, Song Siyang, Yan Siyuan, Yu Zhen, Ge Zongyuan
06 Oct 2025

REACT 2025: the Third Multiple Appropriate Facial Reaction Generation Challenge
Siyang Song, Micol Spitale, Xiangyu Kong, Hengde Zhu, Cheng Luo, ..., Tobias Baur, Fabien Ringeval, Andrew Howes, Elisabeth Andre, Hatice Gunes
22 May 2025

VividListener: Expressive and Controllable Listener Dynamics Modeling for Multi-Modal Responsive Interaction
Shiying Li, Xingqun Qi, Bingkun Yang, Chen Weile, Zezhao Tian, Muyi Sun, Qifeng Liu, Man Zhang, Zhenan Sun
30 Apr 2025

Dyadic Interaction Modeling for Social Behavior Generation
European Conference on Computer Vision (ECCV), 2024
Minh Tran, Di Chang, Maksim Siniukov, Mohammad Soleymani
Tags: VGen
14 Mar 2024

CustomListener: Text-guided Responsive Interaction for User-friendly Listening Head Generation
Xi Liu, Ying Guo, Cheng Zhen, Tong Li, Yingying Ao, Pengfei Yan
Tags: DiffM
01 Mar 2024

From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations
Computer Vision and Pattern Recognition (CVPR), 2024
Evonne Ng, Javier Romero, Timur M. Bagautdinov, Shaojie Bai, Trevor Darrell, Angjoo Kanazawa, Alexander Richard
Tags: VGen
03 Jan 2024

Emotional Listener Portrait: Neural Listener Head Generation with Emotion
IEEE International Conference on Computer Vision (ICCV), 2023
Luchuan Song, Guojun Yin, Zhenchao Jin, Xiaoyi Dong, Chenliang Xu
29 Sep 2023

Can Language Models Learn to Listen?
IEEE International Conference on Computer Vision (ICCV), 2023
Evonne Ng, Sanjay Subramanian, Dan Klein, Angjoo Kanazawa, Trevor Darrell, Shiry Ginosar
21 Aug 2023

REACT2023: the first Multi-modal Multiple Appropriate Facial Reaction Generation Challenge
Siyang Song, Micol Spitale, Cheng Luo, German Barquero, Cristina Palmero, ..., Michel Valstar, Tobias Baur, Fabien Ringeval, Elisabeth Andre, Hatice Gunes
11 Jun 2023

ReactFace: Multiple Appropriate Facial Reaction Generation in Dyadic Interactions
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2023
Cheng Luo, Siyang Song, Weicheng Xie, Micol Spitale, Linlin Shen, Hatice Gunes
Tags: CVBM, DiffM
25 May 2023

Reversible Graph Neural Network-based Reaction Distribution Learning for Multiple Appropriate Facial Reactions Generation
Tong Xu, Micol Spitale, Haozhan Tang, Lu Liu, Hatice Gunes, Siyang Song
Tags: CVBM
24 May 2023

Affective Faces for Goal-Driven Dyadic Communication
Scott Geng, Revant Teotia, Purva Tendulkar, Sachit Menon, Carl Vondrick
Tags: VGen
26 Jan 2023

Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion
Computer Vision and Pattern Recognition (CVPR), 2022
Evonne Ng, Hanbyul Joo, Liwen Hu, Hao Li, Trevor Darrell, Angjoo Kanazawa, Shiry Ginosar
Tags: VGen
18 Apr 2022

Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings
International Conference on Intelligent Virtual Agents (IVA), 2020
Patrik Jonell, Taras Kucherenko, G. Henter, Jonas Beskow
Tags: CVBM
11 Jun 2020

Artificial Intelligence in Intelligent Tutoring Robots: A Systematic Review and Design Guidelines
Applied Sciences (AS), 2019
Jinyu Yang, Bo Zhang
26 Feb 2019

Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives
Jing Han, Zixing Zhang, N. Cummins, Björn Schuller
21 Sep 2018