ResearchTrend.AI
Do Feature Attribution Methods Correctly Attribute Features?


27 April 2021
Yilun Zhou
Serena Booth
Marco Tulio Ribeiro
J. Shah
    FAtt
    XAI

Papers citing "Do Feature Attribution Methods Correctly Attribute Features?"

Showing 50 of 80 citing papers.
Explanations Go Linear: Interpretable and Individual Latent Encoding for Post-hoc Explainability
Simone Piaggesi
Riccardo Guidotti
F. Giannotti
D. Pedreschi
FAtt
LRM
65
0
0
29 Apr 2025
Probabilistic Stability Guarantees for Feature Attributions
Helen Jin
Anton Xue
Weiqiu You
Surbhi Goel
Eric Wong
19
0
0
18 Apr 2025
Fourier Feature Attribution: A New Efficiency Attribution Method
Zechen Liu
Feiyang Zhang
Wei Song
X. Li
Wei Wei
FAtt
57
0
0
02 Apr 2025
From Abstract to Actionable: Pairwise Shapley Values for Explainable AI
Jiaxin Xu
Hung Chau
Angela Burden
TDI
46
0
0
18 Feb 2025
B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable
Shreyash Arya
Sukrut Rao
Moritz Böhle
Bernt Schiele
68
2
0
28 Jan 2025
Lost in Context: The Influence of Context on Feature Attribution Methods for Object Recognition
Sayanta Adhikari
Rishav Kumar
Konda Reddy Mopuri
Rajalakshmi Pachamuthu
26
0
0
05 Nov 2024
Explanations that reveal all through the definition of encoding
A. Puli
Nhi Nguyen
Rajesh Ranganath
FAtt
XAI
31
1
0
04 Nov 2024
Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse
Seung Hyun Cheon
Anneke Wernerfelt
Sorelle A. Friedler
Berk Ustun
FaML
FAtt
30
0
0
29 Oct 2024
Explainable AI needs formal notions of explanation correctness
Stefan Haufe
Rick Wilming
Benedict Clark
Rustam Zhumagambetov
Danny Panknin
Ahcène Boubekki
XAI
26
0
0
22 Sep 2024
Using Part-based Representations for Explainable Deep Reinforcement Learning
Manos Kirtas
Konstantinos Tsampazis
Loukia Avramelou
Nikolaos Passalis
Anastasios Tefas
20
0
0
21 Aug 2024
On the Evaluation Consistency of Attribution-based Explanations
Jiarui Duan
Haoling Li
Haofei Zhang
Hao Jiang
Mengqi Xue
Li Sun
Mingli Song
Jie Song
XAI
30
0
0
28 Jul 2024
A Practical Review of Mechanistic Interpretability for Transformer-Based Language Models
Daking Rai
Yilun Zhou
Shi Feng
Abulhair Saparov
Ziyu Yao
73
18
0
02 Jul 2024
Are Logistic Models Really Interpretable?
Danial Dervovic
Freddy Lecue
Nicolas Marchesotti
Daniele Magazzeni
16
0
0
19 Jun 2024
Provably Better Explanations with Optimized Aggregation of Feature Attributions
Thomas Decker
Ananta R. Bhattarai
Jindong Gu
Volker Tresp
Florian Buettner
18
2
0
07 Jun 2024
Data Science Principles for Interpretable and Explainable AI
Kris Sankaran
FaML
38
0
0
17 May 2024
Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
Rokas Gipiškis
Chun-Wei Tsai
Olga Kurasova
49
5
0
02 May 2024
Toward Understanding the Disagreement Problem in Neural Network Feature Attribution
Niklas Koenen
Marvin N. Wright
FAtt
28
5
0
17 Apr 2024
CNN-based explanation ensembling for dataset, representation and explanations evaluation
Weronika Hryniewska-Guzik
Luca Longo
P. Biecek
FAtt
43
0
0
16 Apr 2024
Comprehensible Artificial Intelligence on Knowledge Graphs: A survey
Simon Schramm
C. Wehner
Ute Schmid
22
25
0
04 Apr 2024
What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks
Kacper Sokol
Julia E. Vogt
18
11
0
19 Mar 2024
Prospector Heads: Generalized Feature Attribution for Large Models & Data
Gautam Machiraju
Alexander Derry
Arjun D Desai
Neel Guha
Amir-Hossein Karimi
James Zou
Russ Altman
Christopher Ré
Parag Mallick
AI4TS
MedIm
41
0
0
18 Feb 2024
Evaluating the Utility of Model Explanations for Model Development
Shawn Im
Jacob Andreas
Yilun Zhou
XAI
FAtt
ELM
11
1
0
10 Dec 2023
Error Discovery by Clustering Influence Embeddings
Fulton Wang
Julius Adebayo
Sarah Tan
Diego Garcia-Olano
Narine Kokhlikyan
11
3
0
07 Dec 2023
Advancing Post Hoc Case Based Explanation with Feature Highlighting
Eoin M. Kenny
Eoin Delaney
Mark T. Keane
18
5
0
06 Nov 2023
How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors?
Zachariah Carmichael
Walter J. Scheirer
FAtt
20
4
0
27 Oct 2023
Instance-wise Linearization of Neural Network for Model Interpretation
Zhimin Li
Shusen Liu
B. Kailkhura
Timo Bremer
Valerio Pascucci
MILM
FAtt
8
0
0
25 Oct 2023
Make Your Decision Convincing! A Unified Two-Stage Framework: Self-Attribution and Decision-Making
Yanrui Du
Sendong Zhao
Hao Wang
Yuhan Chen
Rui Bai
Zewen Qiang
Muzhen Cai
Bing Qin
16
0
0
20 Oct 2023
Can Large Language Models Explain Themselves? A Study of LLM-Generated Self-Explanations
Shiyuan Huang
Siddarth Mamidanna
Shreedhar Jangam
Yilun Zhou
Leilani H. Gilpin
LRM
MILM
ELM
19
64
0
17 Oct 2023
AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments
Yang Zhang
Yawei Li
Hannah Brown
Mina Rezaei
Bernd Bischl
Philip H. S. Torr
Ashkan Khakzar
Kenji Kawaguchi
OOD
47
1
0
10 Oct 2023
COSE: A Consistency-Sensitivity Metric for Saliency on Image Classification
Rangel Daroya
Aaron Sun
Subhransu Maji
11
0
0
20 Sep 2023
A Dual-Perspective Approach to Evaluating Feature Attribution Methods
Yawei Li
Yanglin Zhang
Kenji Kawaguchi
Ashkan Khakzar
Bernd Bischl
Mina Rezaei
FAtt
XAI
39
0
0
17 Aug 2023
Differential Privacy, Linguistic Fairness, and Training Data Influence: Impossibility and Possibility Theorems for Multilingual Language Models
Phillip Rust
Anders Søgaard
17
3
0
17 Aug 2023
Generative Perturbation Analysis for Probabilistic Black-Box Anomaly Attribution
T. Idé
Naoki Abe
13
4
0
09 Aug 2023
Is Last Layer Re-Training Truly Sufficient for Robustness to Spurious Correlations?
Phuong Quynh Le
Jörg Schlötterer
Christin Seifert
OOD
4
6
0
01 Aug 2023
What's meant by explainable model: A Scoping Review
Mallika Mainali
Rosina O. Weber
XAI
13
0
0
18 Jul 2023
Probabilistic Constrained Reinforcement Learning with Formal Interpretability
Yanran Wang
Qiuchen Qian
David E. Boyle
8
3
0
13 Jul 2023
Stability Guarantees for Feature Attributions with Multiplicative Smoothing
Anton Xue
Rajeev Alur
Eric Wong
25
5
0
12 Jul 2023
Fixing confirmation bias in feature attribution methods via semantic match
Giovanni Cinà
Daniel Fernandez-Llaneza
Ludovico Deponte
Nishant Mishra
Tabea E. Rober
Sandro Pezzelle
Iacer Calixto
Rob Goedhart
Ş. İlker Birbil
FAtt
11
0
0
03 Jul 2023
XAI-TRIS: Non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance
Benedict Clark
Rick Wilming
Stefan Haufe
9
4
0
22 Jun 2023
Benchmark data to study the influence of pre-training on explanation performance in MR image classification
Marta Oliveira
Rick Wilming
Benedict Clark
Céline Budding
Fabian Eitel
K. Ritter
Stefan Haufe
11
1
0
21 Jun 2023
A Unified Concept-Based System for Local, Global, and Misclassification Explanations
Fatemeh Aghaeipoor
D. Asgarian
Mohammad Sabokrou
FAtt
11
0
0
06 Jun 2023
Can We Trust Explainable AI Methods on ASR? An Evaluation on Phoneme Recognition
Xiao-lan Wu
P. Bell
A. Rajan
19
5
0
29 May 2023
When a CBR in Hand is Better than Twins in the Bush
Mobyen Uddin Ahmed
Shaibal Barua
Shahina Begum
Mir Riyanul Islam
Rosina O. Weber
20
1
0
09 May 2023
Neighboring Words Affect Human Interpretation of Saliency Explanations
Tim Dockhorn
Yaoliang Yu
Heike Adel
Mahdi Zolnouri
V. Nia
FAtt
MILM
28
3
0
04 May 2023
Interpretable (not just posthoc-explainable) heterogeneous survivor bias-corrected treatment effects for assignment of postdischarge interventions to prevent readmissions
Hongjing Xia
Joshua C. Chang
S. Nowak
Sonya Mahajan
R. Mahajan
Ted L. Chang
Carson C. Chow
20
1
0
19 Apr 2023
Quantifying and Explaining Machine Learning Uncertainty in Predictive Process Monitoring: An Operations Research Perspective
Nijat Mehdiyev
Maxim Majlatow
Peter Fettke
6
11
0
13 Apr 2023
Why is plausibility surprisingly problematic as an XAI criterion?
Weina Jin
Xiaoxiao Li
Ghassan Hamarneh
39
2
0
30 Mar 2023
Are Data-driven Explanations Robust against Out-of-distribution Data?
Tang Li
Fengchun Qiao
Mengmeng Ma
Xiangkai Peng
OODD
OOD
28
10
0
29 Mar 2023
Iterative Partial Fulfillment of Counterfactual Explanations: Benefits and Risks
Yilun Zhou
18
0
0
17 Mar 2023
Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
P. Bommer
M. Kretschmer
Anna Hedström
Dilyara Bareeva
Marina M.-C. Höhne
29
37
0
01 Mar 2023