What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik
arXiv:2009.10639 · XAI · 22 September 2020
Papers citing "What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors" (8 of 8 papers shown)
1. Evaluating Explanation Quality in X-IDS Using Feature Alignment Metrics
   Mohammed Alquliti, Erisa Karafili, BooJoong Kang · XAI · 12 May 2025

2. Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers
   Lokesh Badisa, Sumohana S. Channappayya · 17 Jun 2024

3. Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
   M. Hashemi, Ali Darejeh, Francisco Cruz · 07 Feb 2023

4. SoK: Explainable Machine Learning for Computer Security Applications
   A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer · 22 Aug 2022

5. Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior
   Jean-Stanislas Denain, Jacob Steinhardt · AAML · 27 Jun 2022

6. TRAPDOOR: Repurposing backdoors to detect dataset bias in machine learning-based genomic analysis
   Esha Sarkar, Michail Maniatakos · 14 Aug 2021

7. Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
   Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel · FAtt · 23 Oct 2020

8. SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
   Edward Chou, Florian Tramèr, Giancarlo Pellegrino · AAML · 02 Dec 2018