Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
30 November 2017
Been Kim
Martin Wattenberg
Justin Gilmer
Carrie J. Cai
James Wexler
Fernanda Viégas
Rory Sayres
FAtt
arXiv:1711.11279

Papers citing "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)"
50 / 1,045 papers shown

iCaps: An Interpretable Classifier via Disentangled Capsule Networks
Dahuin Jung, Jonghyun Lee, Jihun Yi, Sungroh Yoon
20 Aug 2020

Abstracting Deep Neural Networks into Concept Graphs for Concept Level Interpretability
Avinash Kori, Parth Natekar, Ganapathy Krishnamurthi, Balaji Srinivasan
14 Aug 2020

Survey of XAI in digital pathology
Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström
14 Aug 2020

Learning Interpretable Microscopic Features of Tumor by Multi-task Adversarial CNNs To Improve Generalization
Mara Graziani, Sebastian Otalora, Stéphane Marchand-Maillet, Henning Müller, Vincent Andrearczyk
04 Aug 2020 · AAML, MedIm

DeepVA: Bridging Cognition and Computation through Semantic Interaction and Deep Learning
Yali Bian, John E. Wenskovitch, Chris North
31 Jul 2020

Debiasing Concept-based Explanations with Causal Analysis
M. T. Bahadori, David Heckerman
22 Jul 2020 · FAtt, CML

Melody: Generating and Visualizing Machine Learning Model Summary to Understand Data and Classifiers Together
G. Chan, E. Bertini, L. G. Nonato, Brian Barr, Claudio T. Silva
21 Jul 2020

Learning Invariances for Interpretability using Supervised VAE
An-phi Nguyen, María Rodríguez Martínez
15 Jul 2020 · DRL

On quantitative aspects of model interpretability
An-phi Nguyen, María Rodríguez Martínez
15 Jul 2020

Explaining Deep Neural Networks using Unsupervised Clustering
Yu-Han Liu, Sercan O. Arik
15 Jul 2020 · SSL, AI4CE

Concept Learners for Few-Shot Learning
Kaidi Cao, Maria Brbic, J. Leskovec
14 Jul 2020 · VLM, OffRL

Towards causal benchmarking of bias in face analysis algorithms
Guha Balakrishnan, Yuanjun Xiong, Wei Xia, Pietro Perona
13 Jul 2020 · CVBM

A simple defense against adversarial attacks on heatmap explanations
Laura Rieger, Lars Kai Hansen
13 Jul 2020 · FAtt, AAML

Locality Guided Neural Networks for Explainable Artificial Intelligence
Randy Tan, N. Khan, L. Guan
12 Jul 2020

Concept Bottleneck Models
Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang
09 Jul 2020

Drug discovery with explainable artificial intelligence
José Jiménez-Luna, F. Grisoni, G. Schneider
01 Jul 2020

Unifying Model Explainability and Robustness via Machine-Checkable Concepts
Vedant Nanda, Till Speicher, John P. Dickerson, Krishna P. Gummadi, Muhammad Bilal Zafar
01 Jul 2020 · AAML

Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors
Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A. Ehinger, Benjamin I. P. Rubinstein
27 Jun 2020 · FAtt

Causality Learning: A New Perspective for Interpretable Machine Learning
Guandong Xu, Tri Dung Duong, Q. Li, S. Liu, Xianzhi Wang
27 Jun 2020 · XAI, OOD, CML

Generative causal explanations of black-box classifiers
Matthew R. O’Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell
24 Jun 2020 · CML

Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection
Michael Tsang, Dehua Cheng, Hanpeng Liu, Xuening Feng, Eric Zhou, Yan Liu
19 Jun 2020 · FAtt

A generalizable saliency map-based interpretation of model outcome
Shailja Thakur, S. Fischmeister
16 Jun 2020 · AAML, FAtt, MILM

Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das, P. Rad
16 Jun 2020 · XAI

Fairness Under Feature Exemptions: Counterfactual and Observational Measures
Sanghamitra Dutta, Praveen Venkatesh, Piotr (Peter) Mardziel, Anupam Datta, P. Grover
14 Jun 2020

Explaining Predictions by Approximating the Local Decision Boundary
G. Vlassopoulos, T. Erven, Henry Brighton, Vlado Menkovski
14 Jun 2020 · FAtt

Aligning Faithful Interpretations with their Social Attribution
Alon Jacovi, Yoav Goldberg
01 Jun 2020

Explainable Artificial Intelligence: a Systematic Review
Giulia Vilone, Luca Longo
29 May 2020 · XAI

Explainable deep learning models in medical image analysis
Amitojdeep Singh, S. Sengupta, Vasudevan Lakshminarayanan
28 May 2020 · XAI

Explaining Neural Networks by Decoding Layer Activations
Johannes Schneider, Michalis Vlachos
27 May 2020 · AI4CE

CausaLM: Causal Model Explanation Through Counterfactual Language Models
Amir Feder, Nadav Oved, Uri Shalit, Roi Reichart
27 May 2020 · CML, LRM

The challenges of deploying artificial intelligence models in a rapidly evolving pandemic
Yipeng Hu, J. Jacob, Geoffrey J. M. Parker, D. Hawkes, J. Hurst, Danail Stoyanov
19 May 2020 · OOD

On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors
Adriano Lucieri, Muhammad Naseer Bajwa, S. Braun, M. I. Malik, Andreas Dengel, Sheraz Ahmed
05 May 2020 · MedIm

Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?
Peter Hase, Mohit Bansal
04 May 2020 · FAtt

Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?
Marco Melis, Michele Scalas, Ambra Demontis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli
04 May 2020 · AAML, FAtt

Explaining AI-based Decision Support Systems using Concept Localization Maps
Adriano Lucieri, Muhammad Naseer Bajwa, Andreas Dengel, Sheraz Ahmed
04 May 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
30 Apr 2020 · AAML, XAI

Corpus-level and Concept-based Explanations for Interpretable Document Classification
Tian Shi, Xuchao Zhang, Ping Wang, Chandan K. Reddy
24 Apr 2020 · FAtt

Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu
23 Apr 2020 · AAML

Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Alon Jacovi, Yoav Goldberg
07 Apr 2020 · XAI

MetaPoison: Practical General-purpose Clean-label Data Poisoning
W. R. Huang, Jonas Geiping, Liam H. Fowl, Gavin Taylor, Tom Goldstein
01 Apr 2020

Architecture Disentanglement for Deep Neural Networks
Jie Hu, Liujuan Cao, Qixiang Ye, Tong Tong, Shengchuan Zhang, Ke Li, Feiyue Huang, Rongrong Ji, Ling Shao
30 Mar 2020 · AAML

A Survey of Deep Learning for Scientific Discovery
M. Raghu, Erica Schmidt
26 Mar 2020 · OOD, AI4CE

RelatIF: Identifying Explanatory Training Examples via Relative Influence
Elnaz Barshan, Marc-Etienne Brunet, Gintare Karolina Dziugaite
25 Mar 2020 · TDI

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller
17 Mar 2020 · XAI

Metafeatures-based Rule-Extraction for Classifiers on Behavioral and Textual Data
Yanou Ramon, David Martens, Theodoros Evgeniou, S. Praet
10 Mar 2020

Explaining Knowledge Distillation by Quantifying the Knowledge
Xu Cheng, Zhefan Rao, Yilan Chen, Quanshi Zhang
07 Mar 2020

The Emerging Landscape of Explainable AI Planning and Decision Making
Tathagata Chakraborti, S. Sreedharan, S. Kambhampati
26 Feb 2020

Neuron Shapley: Discovering the Responsible Neurons
Amirata Ghorbani, James Y. Zou
23 Feb 2020 · FAtt, TDI

Bayes-TrEx: a Bayesian Sampling Approach to Model Transparency by Example
Serena Booth, Yilun Zhou, Ankit J. Shah, J. Shah
19 Feb 2020 · BDL

HypoML: Visual Analysis for Hypothesis-based Evaluation of Machine Learning Models
Qianwen Wang, W. Alexander, J. Pegg, Huamin Qu, Min Chen
12 Feb 2020 · VLM