ResearchTrend.AI

Multilogue-Net: A Context Aware RNN for Multi-modal Emotion Detection and Sentiment Analysis in Conversation
arXiv:2002.08267 · 19 February 2020
Aman Shenoy, Ashish Sardana

Papers citing "Multilogue-Net: A Context Aware RNN for Multi-modal Emotion Detection and Sentiment Analysis in Conversation"

11 / 11 papers shown
  1. Multimodal Emotion Recognition using Audio-Video Transformer Fusion with Cross Attention
     Joe Dhanith, Shravan Venkatraman, Modigari Narendra, Vigya Sharma, Santhosh Malarvannan (20 Feb 2025)
  2. Multi-Modality Collaborative Learning for Sentiment Analysis
     Shanmin Wang, Chengguang Liu, Qingshan Liu (21 Jan 2025)
  3. End-to-end Semantic-centric Video-based Multimodal Affective Computing
     Ronghao Lin, Ying Zeng, Sijie Mai, Haifeng Hu (14 Aug 2024) [VGen]
  4. Target conversation extraction: Source separation using turn-taking dynamics
     Tuochao Chen, Qirui Wang, Bohan Wu, Malek Itani, Sefik Emre Eskimez, Takuya Yoshioka, Shyamnath Gollakota (15 Jul 2024)
  5. Multimodal Vision Transformers with Forced Attention for Behavior Analysis
     Tanay Agrawal, Michal Balazia, Philippe Muller, François Brémond (07 Dec 2022) [ViT]
  6. Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations
     Sijie Mai, Ying Zeng, Haifeng Hu (31 Oct 2022)
  7. TVLT: Textless Vision-Language Transformer
     Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal (28 Sep 2022) [VLM]
  8. Multimodal Emotion Recognition with Modality-Pairwise Unsupervised Contrastive Loss
     Riccardo Franceschini, Enrico Fini, Cigdem Beyan, Alessandro Conti, F. Arrigoni, Elisa Ricci (23 Jul 2022) [SSL, OffRL]
  9. MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis
     Georgios Paraskevopoulos, Efthymios Georgiou, Alexandros Potamianos (24 Jan 2022)
 10. Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition
     Jean-Benoit Delbrouck, Noé Tits, Stéphane Dupont (05 Oct 2020)
 11. Improving Humanness of Virtual Agents and Users' Cooperation through Emotions
     M. Ghafurian, Neil Budnarain, Jesse Hoey (10 Mar 2019)