Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference
Reza Ghaeini, Xiaoli Z. Fern, Prasad Tadepalli
arXiv:1808.03894, 12 August 2018 [MILM]
Papers citing "Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference" (19 papers)
Hierarchical Attention Network for Interpretable ECG-based Heart Disease Classification. Mario Padilla Rodriguez, Mohamed Nafea. 25 Mar 2025.
Fake News Detection After LLM Laundering: Measurement and Explanation. Rupak Kumar Das, Jonathan Dodge. 29 Jan 2025.
A Study of the Plausibility of Attention between RNN Encoders in Natural Language Inference. Duc Hau Nguyen, Pascale Sébillot. 23 Jan 2025.
Towards Reconciling Usability and Usefulness of Explainable AI Methodologies. Pradyumna Tambwekar, Matthew C. Gombolay. 13 Jan 2023.
Deconfounding Legal Judgment Prediction for European Court of Human Rights Cases Towards Better Alignment with Experts. Santosh T.Y.S.S, Shanshan Xu, O. Ichim, Matthias Grabmair. 25 Oct 2022.
Visual correspondence-based explanations improve AI robustness and human-AI team accuracy. Giang Nguyen, Mohammad Reza Taesiri, Anh Totti Nguyen. 26 Jul 2022.
Fooling Explanations in Text Classifiers. Adam Ivankay, Ivan Girardi, Chiara Marchiori, P. Frossard. 07 Jun 2022. [AAML]
Controlling the Focus of Pretrained Language Generation Models. Jiabao Ji, Yoon Kim, James R. Glass, Tianxing He. 02 Mar 2022.
An empirical user-study of text-based nonverbal annotation systems for human-human conversations. Joshua Y. Kim, K. Yacef. 30 Dec 2021.
Interpreting Deep Learning Models in Natural Language Processing: A Review. Xiaofei Sun, Diyi Yang, Xiaoya Li, Tianwei Zhang, Yuxian Meng, Han Qiu, Guoyin Wang, Eduard H. Hovy, Jiwei Li. 20 Oct 2021.
Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification. G. Chrysostomou, Nikolaos Aletras. 06 May 2021.
Self-Explaining Structures Improve NLP Models. Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Fei Wu, Jiwei Li. 03 Dec 2020. [MILM, XAI, LRM, FAtt]
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance. Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld. 26 Jun 2020.
Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection. Hanjie Chen, Guangtao Zheng, Yangfeng Ji. 04 Apr 2020. [FAtt]
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks. Joseph D. Janizek, Pascal Sturmfels, Su-In Lee. 10 Feb 2020. [FAtt]
On Model Stability as a Function of Random Seed. Pranava Madhyastha, Dhruv Batra. 23 Sep 2019.
Understanding Memory Modules on Learning Simple Algorithms. Kexin Wang, Yu Zhou, Shaonan Wang, Jiajun Zhang, Chengqing Zong. 01 Jul 2019.
Saliency Learning: Teaching the Model Where to Pay Attention. Reza Ghaeini, Xiaoli Z. Fern, Hamed Shahbazi, Prasad Tadepalli. 22 Feb 2019. [FAtt, XAI]
A Decomposable Attention Model for Natural Language Inference. Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit. 06 Jun 2016.