Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
30 November 2017 · FAtt
Papers citing "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)"
50 of 1,046 papers shown
HypoML: Visual Analysis for Hypothesis-based Evaluation of Machine Learning Models
Qianwen Wang, W. Alexander, J. Pegg, Huamin Qu, Min Chen
12 Feb 2020 · VLM

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek, Pascal Sturmfels, Su-In Lee
10 Feb 2020 · FAtt

Adversarial TCAV -- Robust and Effective Interpretation of Intermediate Layers in Neural Networks
Rahul Soni, Naresh Shah, Chua Tat Seng, J. D. Moore
10 Feb 2020 · AAML, FAtt

CHAIN: Concept-harmonized Hierarchical Inference Interpretation of Deep Convolutional Neural Networks
Dan Wang, Xinrui Cui, F. I. Z. Jane Wang
05 Feb 2020 · AI4CE

Concept Whitening for Interpretable Image Recognition
Zhi Chen, Yijie Bei, Cynthia Rudin
05 Feb 2020 · FAtt

Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations
S. Sreedharan, Utkarsh Soni, Mudit Verma, Siddharth Srivastava, S. Kambhampati
04 Feb 2020

Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze
03 Feb 2020 · AAML, FAtt, XAI

Interpreting video features: a comparison of 3D convolutional networks and convolutional LSTM networks
Joonatan Mänttäri, Sofia Broomé, John Folkesson, Hedvig Kjellström
02 Feb 2020 · FAtt

On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
08 Jan 2020 · AAML, AI4CE

Restricting the Flow: Information Bottlenecks for Attribution
Karl Schulz, Leon Sixt, Federico Tombari, Tim Landgraf
02 Jan 2020 · FAtt

Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models
Christopher J. Anders, Talmaj Marinc, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin
22 Dec 2019 · AAML

When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt, Maximilian Granz, Tim Landgraf
20 Dec 2019 · BDL, FAtt, XAI

TopoAct: Visually Exploring the Shape of Activations in Deep Learning
Archit Rathore, N. Chalapathi, Sourabh Palande, Bei Wang
13 Dec 2019

Identity Preserve Transform: Understand What Activity Classification Models Have Learnt
Jialing Lyu, Weichao Qiu, Xinyue Wei, Yi Zhang, Alan Yuille, Zhengjun Zha
13 Dec 2019 · VLM
A Programmatic and Semantic Approach to Explaining and Debugging Neural Network Based Object Detectors
Edward J. Kim, D. Gopinath, C. Păsăreanu, S. Seshia
01 Dec 2019
Attributional Robustness Training using Input-Gradient Spatial Alignment
M. Singh, Nupur Kumari, Puneet Mangla, Abhishek Sinha, V. Balasubramanian, Balaji Krishnamurthy
29 Nov 2019 · OOD

Towards Quantification of Explainability in Explainable Artificial Intelligence Methods
Sheikh Rabiul Islam, W. Eberle, S. Ghafoor
22 Nov 2019 · XAI

Domain Knowledge Aided Explainable Artificial Intelligence for Intrusion Detection and Response
Sheikh Rabiul Islam, W. Eberle, S. Ghafoor, Ambareen Siraj, Mike Rogers
22 Nov 2019

Towards a Unified Evaluation of Explanation Methods without Ground Truth
Hao Zhang, Jiayi Chen, Haotian Xue, Quanshi Zhang
20 Nov 2019 · XAI

Enhancing the Extraction of Interpretable Information for Ischemic Stroke Imaging from Deep Neural Networks
Erico Tjoa, Heng Guo, Yuhao Lu, Cuntai Guan
19 Nov 2019 · FAtt

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju
06 Nov 2019 · FAtt, AAML, MLAU

Deep convolutional neural networks for multi-scale time-series classification and application to disruption prediction in fusion devices
R. Churchill, the DIII-D team
31 Oct 2019 · AI4CE

Weight of Evidence as a Basis for Human-Oriented Explanations
David Alvarez-Melis, Hal Daumé, Jennifer Wortman Vaughan, Hanna M. Wallach
29 Oct 2019 · XAI, FAtt

Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models
L. Brocki, N. C. Chung
29 Oct 2019 · FAtt

CXPlain: Causal Explanations for Model Interpretation under Uncertainty
Patrick Schwab, W. Karlen
27 Oct 2019 · FAtt, CML

Fair Generative Modeling via Weak Supervision
Kristy Choi, Aditya Grover, Trisha Singh, Rui Shu, Stefano Ermon
26 Oct 2019

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, S. Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
22 Oct 2019 · XAI

Semantics for Global and Local Interpretation of Deep Neural Networks
Jindong Gu, Volker Tresp
21 Oct 2019 · AI4CE

Understanding Deep Networks via Extremal Perturbations and Smooth Masks
Ruth C. Fong, Mandela Patrick, Andrea Vedaldi
18 Oct 2019 · AAML

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
17 Oct 2019 · FAtt

Iterative Augmentation of Visual Evidence for Weakly-Supervised Lesion Localization in Deep Interpretability Frameworks: Application to Color Fundus Images
C. González-Gonzalo, B. Liefers, Bram van Ginneken, C. I. Sánchez
16 Oct 2019 · MedIm

How are attributes expressed in face DCNNs?
Prithviraj Dhar, Ankan Bansal, Carlos D. Castillo, Joshua Gleason, P. Phillips, Rama Chellappa
12 Oct 2019 · CVBM

Towards Explainable Artificial Intelligence
Wojciech Samek, K. Müller
26 Sep 2019 · XAI

Explaining Visual Models by Causal Attribution
Álvaro Parafita, Jordi Vitrià
19 Sep 2019 · CML, FAtt

Semantically Interpretable Activation Maps: what-where-how explanations within CNNs
Diego Marcos, Sylvain Lobry, D. Tuia
18 Sep 2019 · FAtt, MILM

X-ToM: Explaining with Theory-of-Mind for Gaining Justified Human Trust
Arjun Reddy Akula, Changsong Liu, Sari Saba-Sadiya, Hongjing Lu, S. Todorovic, J. Chai, Song-Chun Zhu
15 Sep 2019

Explainable Deep Learning for Video Recognition Tasks: A Framework & Recommendations
Liam Hiley, Alun D. Preece, Y. Hicks
07 Sep 2019 · XAI

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, ..., Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis L. Wei, Yunfeng Zhang
06 Sep 2019 · XAI

Fairness in Deep Learning: A Computational Perspective
Mengnan Du, Fan Yang, Na Zou, Xia Hu
23 Aug 2019 · FaML, FedML

Computing Linear Restrictions of Neural Networks
Matthew Sotoudeh, Aditya V. Thakur
17 Aug 2019

LoRMIkA: Local rule-based model interpretability with k-optimal associations
Dilini Sewwandi Rajapaksha, Christoph Bergmeir, Wray L. Buntine
11 Aug 2019

explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning
Thilo Spinner, U. Schlegel, H. Schäfer, Mennatallah El-Assady
29 Jul 2019 · HAI

Interpretability Beyond Classification Output: Semantic Bottleneck Networks
M. Losch, Mario Fritz, Bernt Schiele
25 Jul 2019 · UQCV

Benchmarking Attribution Methods with Relative Feature Importance
Mengjiao Yang, Been Kim
23 Jul 2019 · FAtt, XAI

A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI
Erico Tjoa, Cuntai Guan
17 Jul 2019 · XAI

Explaining Classifiers with Causal Concept Effect (CaCE)
Yash Goyal, Amir Feder, Uri Shalit, Been Kim
16 Jul 2019 · CML

Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Fan Yang, Mengnan Du, Xia Hu
16 Jul 2019 · XAI, ELM

The What-If Tool: Interactive Probing of Machine Learning Models
James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, F. Viégas, Jimbo Wilson
09 Jul 2019 · VLM

Generative Counterfactual Introspection for Explainable Deep Learning
Shusen Liu, B. Kailkhura, Donald Loveland, Yong Han
06 Jul 2019

Interpretable Counterfactual Explanations Guided by Prototypes
A. V. Looveren, Janis Klaise
03 Jul 2019 · FAtt