Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
arXiv:1711.11279 · 30 November 2017
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
[FAtt]
Papers citing "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)" (50 of 1,046 shown)
1. Explainable AI and Adoption of Financial Algorithmic Advisors: an Experimental Study (D. David, Yehezkel S. Resheff, Talia Tron; 05 Jan 2021)
2. FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging (Han Guo, Nazneen Rajani, Peter Hase, Mohit Bansal, Caiming Xiong; 31 Dec 2020) [TDI]
3. Quantitative Evaluations on Saliency Methods: An Experimental Study (Xiao-hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen; 31 Dec 2020) [FAtt, XAI]
4. A Survey on Neural Network Interpretability (Yu Zhang, Peter Tiño, A. Leonardis, K. Tang; 28 Dec 2020) [FaML, XAI]
5. Analyzing Representations inside Convolutional Neural Networks (Uday Singh Saini, Evangelos E. Papalexakis; 23 Dec 2020) [FAtt]
6. Towards Robust Explanations for Deep Neural Networks (Ann-Kathrin Dombrowski, Christopher J. Anders, K. Müller, Pan Kessel; 18 Dec 2020) [FAtt]
7. AdjointBackMap: Reconstructing Effective Decision Hypersurfaces from CNN Layers Using Adjoint Operators (Qing Wan, Yoonsuck Choe; 16 Dec 2020)
8. MEME: Generating RNN Model Explanations via Model Extraction (Dmitry Kazhdan, B. Dimanov, M. Jamnik, Pietro Lió; 13 Dec 2020) [LRM]
9. Large-Scale Generative Data-Free Distillation (Liangchen Luo, Mark Sandler, Zi Lin, A. Zhmoginov, Andrew G. Howard; 10 Dec 2020)
10. Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning (Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim; 10 Dec 2020) [FAtt]
11. Investigating Bias in Image Classification using Model Explanations (S. Tong, Lalana Kagal; 10 Dec 2020) [FaML, FAtt, XAI]
12. Understanding Interpretability by generalized distillation in Supervised Classification (Adit Agarwal, K.K. Shukla, Arjan Kuijper, Anirban Mukhopadhyay; 05 Dec 2020) [FaML, FAtt]
13. Learning Interpretable Concept-Based Models with Human Feedback (Isaac Lage, Finale Doshi-Velez; 04 Dec 2020)
14. Concept-based model explanations for Electronic Health Records (Diana Mincu, Eric Loreaux, Shaobo Hou, Sebastien Baur, Ivan V. Protsyuk, Martin G. Seneviratne, A. Mottram, Nenad Tomašev, Alan Karthikesalingam, Jessica Schrouff; 03 Dec 2020)
15. Neural Prototype Trees for Interpretable Fine-grained Image Recognition (Meike Nauta, Ron van Bree, C. Seifert; 03 Dec 2020)
16. Classifying bacteria clones using attention-based deep multiple instance learning interpreted by persistence homology (Adriana Borowa, Dawid Rymarczyk, D. Ochonska, M. Brzychczy-Wloch, Bartosz Zieliński; 02 Dec 2020)
17. ProtoPShare: Prototype Sharing for Interpretable Image Classification and Similarity Discovery (Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński; 29 Nov 2020)
18. Teaching the Machine to Explain Itself using Domain Knowledge (Vladimir Balayan, Pedro Saleiro, Catarina Belém, L. Krippahl, P. Bizarro; 27 Nov 2020)
19. Achievements and Challenges in Explaining Deep Learning based Computer-Aided Diagnosis Systems (Adriano Lucieri, Muhammad Naseer Bajwa, Andreas Dengel, Sheraz Ahmed; 26 Nov 2020)
20. Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations (Wolfgang Stammer, P. Schramowski, Kristian Kersting; 25 Nov 2020) [FAtt]
21. Quantifying Explainers of Graph Neural Networks in Computational Pathology (Guillaume Jaume, Pushpak Pati, Behzad Bozorgtabar, Antonio Foncubierta-Rodríguez, Florinda Feroce, A. Anniciello, T. Rau, Jean-Philippe Thiran, M. Gabrani, O. Goksel; 25 Nov 2020) [FAtt]
22. Debiasing Convolutional Neural Networks via Meta Orthogonalization (Kurtis Evan David, Qiang Liu, Ruth C. Fong; 15 Nov 2020) [FaML]
23. One Explanation is Not Enough: Structured Attention Graphs for Image Classification (Vivswan Shitole, Li Fuxin, Minsuk Kahng, Prasad Tadepalli, Alan Fern; 13 Nov 2020) [FAtt, GNN]
24. Debugging Tests for Model Explanations (Julius Adebayo, M. Muelly, Ilaria Liccardi, Been Kim; 10 Nov 2020) [FAtt]
25. What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes (Herman Yau, Chris Russell, Simon Hadfield; 10 Nov 2020) [FAtt, LRM]
26. Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning (Iro Laina, Ruth C. Fong, Andrea Vedaldi; 27 Oct 2020) [OCL]
27. Benchmarking Deep Learning Interpretability in Time Series Predictions (Aya Abdelsalam Ismail, Mohamed K. Gunady, H. C. Bravo, S. Feizi; 26 Oct 2020) [XAI, AI4TS, FAtt]
28. Now You See Me (CME): Concept-based Model Extraction (Dmitry Kazhdan, B. Dimanov, M. Jamnik, Pietro Lió, Adrian Weller; 25 Oct 2020)
29. Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization (Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel; 23 Oct 2020) [FAtt]
30. Towards falsifiable interpretability research (Matthew L. Leavitt, Ari S. Morcos; 22 Oct 2020) [AAML, AI4CE]
31. A Survey on Deep Learning and Explainability for Automatic Report Generation from Medical Images (Pablo Messina, Pablo Pino, Denis Parra, Alvaro Soto, Cecilia Besa, S. Uribe, Marcelo Andía, C. Tejos, Claudia Prieto, Daniel Capurro; 20 Oct 2020) [MedIm]
32. A Framework to Learn with Interpretation (Jayneel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc; 19 Oct 2020) [AI4CE, FAtt]
33. Evaluating Attribution Methods using White-Box LSTMs (Sophie Hao; 16 Oct 2020) [FAtt, XAI]
34. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI (Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg; 15 Oct 2020)
35. Human-interpretable model explainability on high-dimensional data (Damien de Mijolla, Christopher Frye, M. Kunesch, J. Mansir, Ilya Feige; 14 Oct 2020) [FAtt]
36. Integrating Intrinsic and Extrinsic Explainability: The Relevance of Understanding Neural Networks for Human-Robot Interaction (Tom Weber, S. Wermter; 09 Oct 2020)
37. Simplifying the explanation of deep neural networks with sufficient and necessary feature-sets: case of text classification (Florentin Flambeau Jiechieu Kameni, Norbert Tsopzé; 08 Oct 2020) [XAI, FAtt, MedIm]
38. Explaining Deep Neural Networks (Oana-Maria Camburu; 04 Oct 2020) [XAI, FAtt]
39. Trustworthy Convolutional Neural Networks: A Gradient Penalized-based Approach (Nicholas F. Halliwell, Freddy Lecue; 29 Sep 2020) [FAtt]
40. Disentangled Neural Architecture Search (Xinyue Zheng, Peng Wang, Qigang Wang, Zhongchao Shi; 24 Sep 2020) [AI4CE]
41. The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets (Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom; 23 Sep 2020) [FAtt]
42. What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors (Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik; 22 Sep 2020) [XAI]
43. Introspective Learning by Distilling Knowledge from Online Self-explanation (Jindong Gu, Zhiliang Wu, Volker Tresp; 19 Sep 2020)
44. Contextual Semantic Interpretability (Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia; 18 Sep 2020) [SSL]
45. Are Interpretations Fairly Evaluated? A Definition Driven Pipeline for Post-Hoc Interpretability (Ninghao Liu, Yunsong Meng, Xia Hu, Tie Wang, Bo Long; 16 Sep 2020) [XAI, FAtt]
46. Understanding the Role of Individual Units in a Deep Neural Network (David Bau, Jun-Yan Zhu, Hendrik Strobelt, Àgata Lapedriza, Bolei Zhou, Antonio Torralba; 10 Sep 2020) [GAN]
47. Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset (Erico Tjoa, Cuntai Guan; 07 Sep 2020) [XAI, FAtt]
48. Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks (Nilaksh Das, Haekyu Park, Zijie J. Wang, Fred Hohman, Robert Firstman, Emily Rogers, Duen Horng Chau; 05 Sep 2020) [AAML]
49. Generalization on the Enhancement of Layerwise Relevance Interpretability of Deep Neural Network (Erico Tjoa, Cuntai Guan; 05 Sep 2020) [FAtt]
50. Making Neural Networks Interpretable with Attribution: Application to Implicit Signals Prediction (Darius Afchar, Romain Hennequin; 26 Aug 2020) [FAtt, XAI]