Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation

24 February 2023
N. Jethani, A. Saporta, Rajesh Ranganath
FAtt

Papers citing "Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation"

11 papers shown

Explanations that reveal all through the definition of encoding
A. Puli, Nhi Nguyen, Rajesh Ranganath
FAtt, XAI
04 Nov 2024

Local Feature Selection without Label or Feature Leakage for Interpretable Machine Learning Predictions
Harrie Oosterhuis, Lijun Lyu, Avishek Anand
FAtt
16 Jul 2024

Adaptive Sampling of k-Space in Magnetic Resonance for Rapid Pathology Prediction
Chen-Yu Yen, Raghav Singhal, Umang Sharma, Rajesh Ranganath, S. Chopra, Lerrel Pinto
06 Jun 2024

Explaining Time Series via Contrastive and Locally Sparse Perturbations
Zichuan Liu, Yingying Zhang, Tianchun Wang, Zefan Wang, Dongsheng Luo, ..., Min Wu, Yi Wang, Chunlin Chen, Lunting Fan, Qingsong Wen
16 Jan 2024

Fast Shapley Value Estimation: A Unified Approach
Borui Zhang, Baotong Tian, Wenzhao Zheng, Jie Zhou, Jiwen Lu
TDI, FAtt
02 Nov 2023

Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks
Xu Zheng, Farhad Shirani, Tianchun Wang, Wei Cheng, Zhuomin Chen, Haifeng Chen, Hua Wei, Dongsheng Luo
03 Oct 2023

Beyond Single-Feature Importance with ICECREAM
M.-J. Oesterle, Patrick Blobaum, Atalanti A. Mastakouri, Elke Kirschbaum
CML
19 Jul 2023

Explaining Predictive Uncertainty with Information Theoretic Shapley Values
David S. Watson, Joshua O'Hara, Niek Tax, Richard Mudd, Ido Guy
TDI, FAtt
09 Jun 2023

FastSHAP: Real-Time Shapley Value Estimation
N. Jethani, Mukund Sudarshan, Ian Covert, Su-In Lee, Rajesh Ranganath
TDI, FAtt
15 Jul 2021

Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
N. Jethani, Mukund Sudarshan, Yindalon Aphinyanagphongs, Rajesh Ranganath
FAtt
02 Mar 2021

Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
PINN, 3DV
25 Aug 2016