ResearchTrend.AI

arXiv:2312.10201 · Cited By
CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition

15 December 2023
Cheng Peng, Ke Chen, Lidan Shou, Gang Chen

Papers citing "CARAT: Contrastive Feature Reconstruction and Aggregation for Multi-Modal Multi-Label Emotion Recognition"

5 / 5 papers shown
RAMer: Reconstruction-based Adversarial Model for Multi-party Multi-modal Multi-label Emotion Recognition
Xudong Yang, Yizhang Zhu, Nan Tang, Yuyu Luo
09 Feb 2025
Recent Trends of Multimodal Affective Computing: A Survey from NLP Perspective
Guimin Hu, Yi Xin, Weimin Lyu, Haojian Huang, Chang Sun, Z. Zhu, Lin Gui, Ruichu Cai, Erik Cambria, Hasti Seifi
11 Sep 2024
Enhancing Emotion Recognition in Conversation through Emotional Cross-Modal Fusion and Inter-class Contrastive Learning
Haoxiang Shi, Xulong Zhang, Ning Cheng, Yong Zhang, Jun Yu, Jing Xiao, Jianzong Wang
28 May 2024
Non-verbal information in spontaneous speech -- towards a new framework of analysis
Tirza Biron, Moshe Barboy, Eran Ben-Artzy, Alona Golubchik, Yanir Marmor, Smadar Szekely, Yaron Winter, David Harel
06 Mar 2024
CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations
Mohammadreza Zolfaghari, Yi Zhu, Peter V. Gehler, Thomas Brox
30 Sep 2021