Rethinking Stability for Attribution-based Explanations
arXiv 2203.06877 · 14 March 2022
Chirag Agarwal, Nari Johnson, Martin Pawelczyk, Satyapriya Krishna, Eshika Saxena, Marinka Zitnik, Himabindu Lakkaraju
FAtt
Papers citing "Rethinking Stability for Attribution-based Explanations" (10 / 10 papers shown)
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger
XAI, ELM · 03 Jan 2025

A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
FAtt, LRM · 03 May 2024

Stability of Explainable Recommendation
Sairamvinay Vijayaraghavan, Prasant Mohapatra
AAML · 03 May 2024

T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients
Evandro S. Ortigossa, Fábio F. Dias, Brian Barr, Claudio T. Silva, L. G. Nonato
FAtt · 25 Apr 2024

Confident Feature Ranking
Bitya Neuhof, Y. Benjamini
FAtt · 28 Jul 2023

Rectifying Group Irregularities in Explanations for Distribution Shift
Adam Stein, Yinjun Wu, Eric Wong, Mayur Naik
25 May 2023

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
XAI, FAtt · 10 Nov 2022

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
XAI, FAtt, AAML · 09 Nov 2022

Use-Case-Grounded Simulations for Explanation Evaluation
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
FAtt, ELM · 05 Jun 2022

From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence
David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé, Hanna M. Wallach, Jennifer Wortman Vaughan
FAtt · 27 Apr 2021