Leveraging Speech for Gesture Detection in Multimodal Communication (arXiv:2404.14952)

23 April 2024
E. Ghaleb, I. Burenko, Marlou Rasenberg, Wim Pouw, Ivan Toni, Peter Uhrig, Anna Wilson, Judith Holler, Asli Ozyurek, Raquel Fernández
SLR

Papers citing "Leveraging Speech for Gesture Detection in Multimodal Communication"

4 papers shown

  1. Is Cross-Attention Preferable to Self-Attention for Multi-Modal Emotion Recognition?
     Vandana Rajan, A. Brutti, Andrea Cavallaro
     18 Feb 2022

  2. Multimodal analysis of the predictability of hand-gesture properties
     Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström, G. Henter
     12 Aug 2021

  3. Transformers in Vision: A Survey
     Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, F. Khan, M. Shah
     ViT
     04 Jan 2021

  4. Stronger, Faster and More Explainable: A Graph Convolutional Baseline for Skeleton-based Action Recognition
     Yisheng Song, Zhang Zhang, Caifeng Shan, Liang Wang
     20 Oct 2020