Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification
G. Chrysostomou, Nikolaos Aletras
arXiv:2105.02657 · 6 May 2021
Papers citing "Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification" (23 of 23 shown):
1. From Critique to Clarity: A Pathway to Faithful and Personalized Code Explanations with Large Language Models
   Zexing Xu, Zhuang Luo, Yichuan Li, Kyumin Lee, S. Rasoul Etesami · 28 Jan 2025

2. On Explaining with Attention Matrices
   Omar Naim, Nicholas Asher · 24 Oct 2024

3. PromptExp: Multi-granularity Prompt Explanation of Large Language Models
   Ximing Dong, Shaowei Wang, Dayi Lin, Gopi Krishnan Rajbahadur, Boquan Zhou, Shichao Liu, Ahmed E. Hassan · 16 Oct 2024 · [AAML, LRM]

4. Continuous Risk Prediction
   Yi Dai · 12 Oct 2024

5. Noise-Free Explanation for Driving Action Prediction
   Hongbo Zhu, Theodor Wulff, R. S. Maharjan, Jinpei Han, Angelo Cangelosi · 08 Jul 2024 · [AAML, FAtt]

6. Towards a Framework for Evaluating Explanations in Automated Fact Verification
   Neema Kotonya, Francesca Toni · 29 Mar 2024

7. Plausible Extractive Rationalization through Semi-Supervised Entailment Signal
   Yeo Wei Jie, Ranjan Satapathy, Erik Cambria · 13 Feb 2024

8. Towards Faithful Explanations for Text Classification with Robustness Improvement and Explanation Guided Training
   Dongfang Li, Baotian Hu, Qingcai Chen, Shan He · 29 Dec 2023

9. Can Large Language Models Explain Themselves? A Study of LLM-Generated Self-Explanations
   Shiyuan Huang, Siddarth Mamidanna, Shreedhar Jangam, Yilun Zhou, Leilani H. Gilpin · 17 Oct 2023 · [LRM, MILM, ELM]

10. Evaluating Explanation Methods for Vision-and-Language Navigation
    Guanqi Chen, Lei Yang, Guanhua Chen, Jia Pan · 10 Oct 2023 · [XAI]

11. SPADE: Sparsity-Guided Debugging for Deep Neural Networks
    Arshia Soltani Moakhar, Eugenia Iofinova, Elias Frantar, Dan Alistarh · 06 Oct 2023

12. Explainability for Large Language Models: A Survey
    Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Mengnan Du · 02 Sep 2023 · [LRM]

13. Incorporating Attribution Importance for Improving Faithfulness Metrics
    Zhixue Zhao, Nikolaos Aletras · 17 May 2023

14. Truthful Meta-Explanations for Local Interpretability of Machine Learning Models
    Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas · 07 Dec 2022

15. On the Impact of Temporal Concept Drift on Model Explanations
    Zhixue Zhao, G. Chrysostomou, Kalina Bontcheva, Nikolaos Aletras · 17 Oct 2022

16. Domain Classification-based Source-specific Term Penalization for Domain Adaptation in Hate-speech Detection
    Tulika Bose, Nikolaos Aletras, Irina Illina, Dominique Fohr · 18 Sep 2022

17. The Solvability of Interpretability Evaluation Metrics
    Yilun Zhou, J. Shah · 18 May 2022

18. Identifying and Characterizing Active Citizens who Refute Misinformation in Social Media
    Yida Mu, Pu Niu, Nikolaos Aletras · 21 Apr 2022

19. A Comparative Study of Faithfulness Metrics for Model Interpretability Methods
    Chun Sik Chan, Huanqi Kong, Guanqing Liang · 12 Apr 2022

20. Rethinking Attention-Model Explainability through Faithfulness Violation Test
    Y. Liu, Haoliang Li, Yangyang Guo, Chen Kong, Jing Li, Shiqi Wang · 28 Jan 2022 · [FAtt]

21. Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience
    G. Chrysostomou, Nikolaos Aletras · 31 Aug 2021

22. Flexible Instance-Specific Rationalization of NLP Models
    G. Chrysostomou, Nikolaos Aletras · 16 Apr 2021

23. Local Interpretations for Explainable Natural Language Processing: A Survey
    Siwen Luo, Hamish Ivison, S. Han, Josiah Poon · 20 Mar 2021 · [MILM]