Feature Removal Is a Unifying Principle for Model Explanation Methods
Ian Covert, Scott M. Lundberg, Su-In Lee
6 November 2020 · arXiv:2011.03623 · FAtt
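The unifying idea named in the title is that many explanation methods attribute importance by removing features and measuring how the model's behavior changes. Below is a minimal sketch of that removal principle in the occlusion style, assuming a scikit-learn-like binary classifier with a predict_proba method; the function name, the mean-value baseline, and the example classifier are illustrative choices, not details taken from the paper or from this page.

```python
import numpy as np

def occlusion_attributions(model, x, background):
    """Score each feature of a single input x by how much the predicted
    probability changes when that feature is 'removed', i.e. replaced by
    its average value in a background dataset (one possible removal choice)."""
    baseline = background.mean(axis=0)                # reference values standing in for "removed"
    p_full = model.predict_proba(x[None, :])[0, 1]    # prediction with all features present
    scores = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        x_masked = x.copy()
        x_masked[j] = baseline[j]                     # remove feature j
        p_masked = model.predict_proba(x_masked[None, :])[0, 1]
        scores[j] = p_full - p_masked                 # drop in prediction as importance
    return scores

# Hypothetical usage with a fitted scikit-learn classifier:
#   clf = LogisticRegression().fit(X_train, y_train)
#   scores = occlusion_attributions(clf, X_test[0], X_train)
```

Mean-value imputation is only one way to remove features; methods differ in what they remove, how they remove it, and how they summarize the resulting behavior change.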

Papers citing "Feature Removal Is a Unifying Principle for Model Explanation Methods"

20 / 20 papers shown
  1. CAGE: Causality-Aware Shapley Value for Global Explanations
     Nils Ole Breuer, Andreas Sauter, Majid Mohammadi, Erman Acar · FAtt · 17 Apr 2024
  2. SmoothHess: ReLU Network Feature Interactions via Stein's Lemma
     Max Torop, A. Masoomi, Davin Hill, Kivanc Kose, Stratis Ioannidis, Jennifer Dy · 01 Nov 2023
  3. Faithful Knowledge Graph Explanations for Commonsense Reasoning
     Weihe Zhai, A. Zubiaga · 07 Oct 2023
  4. On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations
     Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni · FAtt · 13 Jul 2023
  5. Prototype-Based Interpretability for Legal Citation Prediction
     Chunyan Luo, R. Bhambhoria, Samuel Dahan, Xiao-Dan Zhu · ELM, AILaw · 25 May 2023
  6. Explanations of Black-Box Models based on Directional Feature Interactions
     A. Masoomi, Davin Hill, Zhonghui Xu, C. Hersh, E. Silverman, P. Castaldi, Stratis Ioannidis, Jennifer Dy · FAtt · 16 Apr 2023
  7. Contrastive Video Question Answering via Video Graph Transformer
     Junbin Xiao, Pan Zhou, Angela Yao, Yicong Li, Richang Hong, Shuicheng Yan, Tat-Seng Chua · ViT · 27 Feb 2023
  8. Boundary-Aware Uncertainty for Feature Attribution Explainers
     Davin Hill, A. Masoomi, Max Torop, S. Ghimire, Jennifer Dy · FAtt · 05 Oct 2022
  9. How explainable are adversarially-robust CNNs?
     Mehdi Nourelahi, Lars Kotthoff, Peijie Chen, Anh Totti Nguyen · AAML, FAtt · 25 May 2022
  10. Reinforced Causal Explainer for Graph Neural Networks
      Xiang Wang, Y. Wu, An Zhang, Fuli Feng, Xiangnan He, Tat-Seng Chua · CML · 23 Apr 2022
  11. Explainability in Music Recommender Systems
      Darius Afchar, Alessandro B. Melchiorre, Markus Schedl, Romain Hennequin, Elena V. Epure, Manuel Moussallam · 25 Jan 2022
  12. Deconfounding to Explanation Evaluation in Graph Neural Networks
      Yingmin Wu, Xiang Wang, An Zhang, Xia Hu, Fuli Feng, Xiangnan He, Tat-Seng Chua · FAtt, CML · 21 Jan 2022
  13. Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models?
      Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen · 22 Oct 2021
  14. Shapley variable importance clouds for interpretable machine learning
      Yilin Ning, M. Ong, Bibhas Chakraborty, B. Goldstein, Daniel Ting, Roger Vaughan, Nan Liu · FAtt · 06 Oct 2021
  15. Attribution of Predictive Uncertainties in Classification Models
      Iker Perez, Piotr Skalski, Alec E. Barns-Graham, Jason Wong, David Sutton · UQCV · 19 Jul 2021
  16. The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
      Giang Nguyen, Daeyoung Kim, Anh Totti Nguyen · FAtt · 31 May 2021
  17. Can We Faithfully Represent Masked States to Compute Shapley Values on a DNN?
      J. Ren, Zhanpeng Zhou, Qirui Chen, Quanshi Zhang · FAtt, TDI · 22 May 2021
  18. Towards Rigorous Interpretations: a Formalisation of Feature Attribution
      Darius Afchar, Romain Hennequin, Vincent Guigue · FAtt · 26 Apr 2021
  19. Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
      N. Jethani, Mukund Sudarshan, Yindalon Aphinyanagphongs, Rajesh Ranganath · FAtt · 02 Mar 2021
  20. PredDiff: Explanations and Interactions from Conditional Expectations
      Stefan Blücher, Johanna Vielhaben, Nils Strodthoff · FAtt · 26 Feb 2021