Do Feature Attribution Methods Correctly Attribute Features?

27 April 2021
Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah
FAtt, XAI

Papers citing "Do Feature Attribution Methods Correctly Attribute Features?"

30 / 80 papers shown
Tracr: Compiled Transformers as a Laboratory for Interpretability
  David Lindner, János Kramár, Sebastian Farquhar, Matthew Rahtz, Tom McGrath, Vladimir Mikulik
  12 Jan 2023

Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation
  Julius Adebayo, M. Muelly, H. Abelson, Been Kim
  09 Dec 2022

Explainer Divergence Scores (EDS): Some Post-Hoc Explanations May be Effective for Detecting Unknown Spurious Correlations
  Shea Cardozo, Gabriel Islas Montero, Dmitry Kazhdan, B. Dimanov, Maleakhi A. Wijaya, M. Jamnik, Pietro Liò
  14 Nov 2022 · AAML

Towards Faithful Model Explanation in NLP: A Survey
  Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
  22 Sep 2022 · XAI

Interpretable (not just posthoc-explainable) medical claims modeling for discharge placement to prevent avoidable all-cause readmissions or death
  Joshua C. Chang, Ted L. Chang, Carson C. Chow, R. Mahajan, Sonya Mahajan, Joe Maisog, Shashaank Vattikuti, Hongjing Xia
  28 Aug 2022 · FAtt, OOD

Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior
  Jean-Stanislas Denain, Jacob Steinhardt
  27 Jun 2022 · AAML

Robustness of Explanation Methods for NLP Models
  Shriya Atmakuri, Tejas Chheda, Dinesh Kandula, Nishant Yadav, Taesung Lee, Hessel Tuinhof
  24 Jun 2022 · FAtt, AAML

Learning to Estimate Shapley Values with Vision Transformers
  Ian Covert, Chanwoo Kim, Su-In Lee
  10 Jun 2022 · FAtt

Order-sensitive Shapley Values for Evaluating Conceptual Soundness of NLP Models
  Kaiji Lu, Anupam Datta
  01 Jun 2022

The Solvability of Interpretability Evaluation Metrics
  Yilun Zhou, J. Shah
  18 May 2022

ExSum: From Local Explanations to Model Understanding
  Yilun Zhou, Marco Tulio Ribeiro, J. Shah
  30 Apr 2022 · FAtt, LRM

Can Rationalization Improve Robustness?
  Howard Chen, Jacqueline He, Karthik Narasimhan, Danqi Chen
  25 Apr 2022 · AAML

Missingness Bias in Model Debugging
  Saachi Jain, Hadi Salman, E. Wong, Pengchuan Zhang, Vibhav Vineet, Sai H. Vemprala, A. Madry
  19 Apr 2022

Guidelines and Evaluation of Clinical Explainable AI in Medical Image Analysis
  Weina Jin, Xiaoxiao Li, M. Fatehi, Ghassan Hamarneh
  16 Feb 2022 · ELM, XAI

Interpretable pipelines with evolutionarily optimized modules for RL tasks with visual inputs
  Leonardo Lucio Custode, Giovanni Iacca
  10 Feb 2022

Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience
  Antonios Mamalakis, E. Barnes, I. Ebert-Uphoff
  07 Feb 2022

Visualizing Automatic Speech Recognition -- Means for a Better Understanding?
  Karla Markert, Romain Parracone, Mykhailo Kulakov, Philip Sperl, Ching-yu Kao, Konstantin Böttinger
  01 Feb 2022

Diagnosing AI Explanation Methods with Folk Concepts of Behavior
  Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova
  27 Jan 2022

A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes
  Mazda Moayeri, Phillip E. Pope, Yogesh Balaji, S. Feizi
  26 Jan 2022 · VLM

"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
  Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, Katja Filippova
  14 Nov 2021

Revisiting Sanity Checks for Saliency Maps
  G. Yona, D. Greenfeld
  27 Oct 2021 · AAML, FAtt

The Irrationality of Neural Rationale Models
  Yiming Zheng, Serena Booth, J. Shah, Yilun Zhou
  14 Oct 2021

Combining Feature and Instance Attribution to Detect Artifacts
  Pouya Pezeshkpour, Sarthak Jain, Sameer Singh, Byron C. Wallace
  01 Jul 2021 · TDI

How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
  Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel
  23 Jun 2021 · FAtt

A Framework for Evaluating Post Hoc Feature-Additive Explainers
  Zachariah Carmichael, Walter J. Scheirer
  15 Jun 2021 · FAtt

The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations
  Peter Hase, Harry Xie, Mohit Bansal
  01 Jun 2021 · OODD, LRM, FAtt

Sanity Simulations for Saliency Methods
  Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
  13 May 2021 · FAtt

Benchmarking Perturbation-based Saliency Maps for Explaining Atari Agents
  Tobias Huber, Benedikt Limmer, Elisabeth André
  18 Jan 2021 · FAtt

RoCUS: Robot Controller Understanding via Sampling
  Yilun Zhou, Serena Booth, Nadia Figueroa, J. Shah
  25 Dec 2020

Learning Attitudes and Attributes from Multi-Aspect Reviews
  Julian McAuley, J. Leskovec, Dan Jurafsky
  15 Oct 2012