ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2109.13116
VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering

27 September 2021
Ekta Sood
Fabian Kögel
Florian Strohm
Prajit Dhar
Andreas Bulling

Papers citing "VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering"

3 / 3 papers shown
VISTA: A Visual and Textual Attention Dataset for Interpreting Multimodal Models
Harshit, Tolga Tasdizen
Tags: CoGe, VLM
06 Oct 2024
The Copenhagen Corpus of Eye Tracking Recordings from Natural Reading of Danish Texts
Nora Hollenstein, Maria Barrett, Marina Bjornsdóttir
28 Apr 2022
Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
16 Nov 2016