Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension

13 October 2020
Ekta Sood, Simon Tannert, Diego Frassinelli, Andreas Bulling, Ngoc Thang Vu
    HAI

Papers citing "Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension"

8 / 8 papers shown
OASST-ETC Dataset: Alignment Signals from Eye-tracking Analysis of LLM Responses
Angela Lopez-Cardona, Sebastian Idesis, Miguel Barreda-Ángeles, Sergi Abadal, Ioannis Arapakis
13 Mar 2025

Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models
Angela Lopez-Cardona, Carlos Segura, Alexandros Karatzoglou, Sergi Abadal, Ioannis Arapakis
ALM
02 Oct 2024

A Comparative Study on Textual Saliency of Styles from Eye Tracking, Annotations, and Language Models
Karin de Langis, Dongyeop Kang
19 Dec 2022

The Copenhagen Corpus of Eye Tracking Recordings from Natural Reading of Danish Texts
Nora Hollenstein, Maria Barrett, Marina Bjornsdóttir
28 Apr 2022

Multimodal Integration of Human-Like Attention in Visual Question Answering
Ekta Sood, Fabian Kögel, Philippe Muller, Dominike Thomas, Mihai Bâce, Andreas Bulling
27 Sep 2021

VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering
Ekta Sood, Fabian Kögel, Florian Strohm, Prajit Dhar, Andreas Bulling
27 Sep 2021

CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals
Yuqi Ren, Deyi Xiong
10 Jun 2021

Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning
17 Aug 2015