Why Attentions May Not Be Interpretable?

Knowledge Discovery and Data Mining (KDD), 2020
10 June 2020
Bing Bai, Jian Liang, Guanhua Zhang, Hao Li, Kun Bai, Haiwei Yang
FAtt

Papers citing "Why Attentions May Not Be Interpretable?"

32 papers

PruneGCRN: Minimizing and explaining spatio-temporal problems through node pruning
Javier García-Sigüenza, Mirco Nanni, Faraón Llorens-Largo, José F. Vicent
12 Oct 2025

Deep Learning to Identify the Spatio-Temporal Cascading Effects of Train Delays in a High-Density Network
Vu Duc Anh Nguyen, Ziyue Li
10 Oct 2025

Understanding Sensitivity of Differential Attention through the Lens of Adversarial Robustness
Tsubasa Takahashi, Shojiro Yamabe, Futa Waseda, Kento Sasaki
AAML
01 Oct 2025

Unsupervised Candidate Ranking for Lexical Substitution via Holistic Sentence Semantics
Zhongyang Hu, Naijie Gu, Xiangzhi Tao, Tianhui Gu, Yibing Zhou
15 Sep 2025

Can Hessian-Based Insights Support Fault Diagnosis in Attention-based Models?
Sigma Jahan, Mohammad Masudur Rahman
09 Jun 2025

Short-circuiting Shortcuts: Mechanistic Investigation of Shortcuts in Text Classification
Leon Eshuijs, Shihan Wang, Antske Fokkens
09 May 2025

Hierarchical Attention Network for Interpretable ECG-based Heart Disease Classification
Mario Padilla Rodriguez, Mohamed Nafea
25 Mar 2025

Evaluating Visual Explanations of Attention Maps for Transformer-based Medical Imaging
Minjae Chung, Jong Bum Won, Ganghyun Kim, Yujin Kim, Utku Ozbulak
MedIm
12 Mar 2025

Regularization, Semi-supervision, and Supervision for a Plausible Attention-Based Explanation
International Conference on Applications of Natural Language to Data Bases (NLDB), 2025
Duc Hau Nguyen, Cyrielle Mallart, Guillaume Gravier, Pascale Sébillot
22 Jan 2025

Continuous Risk Prediction
Yi Dai
12 Oct 2024

Towards Understanding Sensitive and Decisive Patterns in Explainable AI: A Case Study of Model Interpretation in Geometric Deep Learning
Jiajun Zhu, Siqi Miao, Rex Ying, Pan Li
30 Jun 2024

Towards Trustworthy AI: A Review of Ethical and Robust Large Language Models
Meftahul Ferdaus, Mahdi Abdelguerfi, Elias Ioup, Kendall N. Niles, Ken Pathak, Steve Sloan
01 Jun 2024

Infinite-Dimensional Feature Interaction
Chenhui Xu, Fuxun Yu, Maoliang Li, Zihao Zheng, Zirui Xu, Jinjun Xiong, Xiang Chen
22 May 2024

Analyzing Semantic Change through Lexical Replacements
Francesco Periti, Pierluigi Cassotti, Haim Dubossarsky, Nina Tahmasebi
29 Apr 2024

Does Faithfulness Conflict with Plausibility? An Empirical Study in Explainable AI across NLP Tasks
Xiaolei Lu, Jianghong Ma
29 Mar 2024

Plausible Extractive Rationalization through Semi-Supervised Entailment Signal
Yeo Wei Jie, Frank Xing, Xiaoshi Zhong
13 Feb 2024

Neuron-Level Knowledge Attribution in Large Language Models
Zeping Yu, Sophia Ananiadou
FAtt, KELM
19 Dec 2023

CGS-Mask: Making Time Series Predictions Intuitive for All
AAAI Conference on Artificial Intelligence (AAAI), 2023
Feng Lu, Wei Li, Yifei Sun, Cheng Song, Yufei Ren, Albert Y. Zomaya
AI4TS
15 Dec 2023

Interpreting and Exploiting Functional Specialization in Multi-Head Attention under Multi-task Learning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Chong Li, Shaonan Wang, Yunhao Zhang, Jiajun Zhang, Chengqing Zong
16 Oct 2023

Evaluating Explanation Methods for Vision-and-Language Navigation
European Conference on Artificial Intelligence (ECAI), 2023
Guanqi Chen, Lei Yang, Guanhua Chen, Jia Pan
XAI
10 Oct 2023

Insights Into the Inner Workings of Transformer Models for Protein Function Prediction
Bioinformatics, 2023
M. Wenzel, Erik Grüner, Nils Strodthoff
ViT
07 Sep 2023

Explainability for Large Language Models: A Survey
ACM Transactions on Intelligent Systems and Technology (ACM TIST), 2023
Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, D. Yin, Jundong Li
LRM
02 Sep 2023

A Novel Convolutional Neural Network Architecture with a Continuous Symmetry
CAAI International Conference on Artificial Intelligence (ICCAI), 2023
Y. Liu, Han-Juan Shao, Bing Bai
AI4CE
03 Aug 2023

Transformers in Reinforcement Learning: A Survey
Pranav Agarwal, A. Rahman, P. St-Charles, Simon J. D. Prince, Samira Ebrahimi Kahou
OffRL
12 Jul 2023

Learning to Select Prototypical Parts for Interpretable Sequential Data Modeling
AAAI Conference on Artificial Intelligence (AAAI), 2022
Yifei Zhang, Nengneng Gao, Cunqing Ma
07 Dec 2022

ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US
Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, Joel Niklaus
AILaw, ELM
01 Nov 2022

Interpretable Geometric Deep Learning via Learnable Randomness Injection
International Conference on Learning Representations (ICLR), 2022
Siqi Miao, Yunan Luo, Miaoyuan Liu, Pan Li
30 Oct 2022

TestAug: A Framework for Augmenting Capability-based NLP Tests
International Conference on Computational Linguistics (COLING), 2022
Guanqun Yang, Mirazul Haque, Qiaochu Song, Wei Yang, Xueqing Liu
ELM
14 Oct 2022

Continuous Diagnosis and Prognosis by Controlling the Update Process of Deep Neural Networks
Patterns, 2022
Chenxi Sun, Hongyan Li, Moxian Song, D. Cai, B. Zhang, linda Qiao
06 Oct 2022

Interpretable Graph Neural Networks for Connectome-Based Brain Disorder Analysis
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2022
Hejie Cui, Wei Dai, Yanqiao Zhu, Xiaoxiao Li, Lifang He, Carl Yang
30 Jun 2022

Rethinking Attention-Model Explainability through Faithfulness Violation Test
International Conference on Machine Learning (ICML), 2022
Zichen Liu, Haoliang Li, Yangyang Guo, Chen Kong, Jing Li, Shiqi Wang
FAtt
28 Jan 2022

Local Interpretations for Explainable Natural Language Processing: A Survey
ACM Computing Surveys (CSUR), 2021
Siwen Luo, Michal Guerquin, S. Han, Josiah Poon
MILM
20 Mar 2021