ResearchTrend.AI

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
arXiv:1711.11279

30 November 2017
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
[FAtt]

Papers citing "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)"

50 / 1,046 papers shown

- VAE-CE: Visual Contrastive Explanation using Disentangled VAEs. Y. Poels, Vlado Menkovski. [CoGe, DRL] (20 Aug 2021)
- Optimising Knee Injury Detection with Spatial Attention and Validating Localisation Ability. Niamh Belton, I. Welaratne, Adil Dahlan, Ron Hearne, Misgina Tsighe Hagos, Aonghus Lawlor, Kathleen M. Curran. (18 Aug 2021)
- Challenges for cognitive decoding using deep learning methods. A. Thomas, Christopher Ré, R. Poldrack. [AI4CE] (16 Aug 2021)
- Towards Visual Explainable Active Learning for Zero-Shot Classification. Shichao Jia, Zeyu Li, Nuo Chen, Jiawan Zhang. [VLM] (15 Aug 2021)
- Finding Representative Interpretations on Convolutional Neural Networks. P. C. Lam, Lingyang Chu, Maxim Torgonskiy, J. Pei, Yong Zhang, Lanjun Wang. [FAtt, SSL, HAI] (13 Aug 2021)
- On the Explanatory Power of Decision Trees. Gilles Audemard, S. Bellart, Louenas Bounia, F. Koriche, Jean-Marie Lagniez, Pierre Marquis. [FAtt] (11 Aug 2021)
- Logic Explained Networks. Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Marco Gori, Pietro Lió, Marco Maggini, S. Melacci. (11 Aug 2021)
- Post-hoc Interpretability for Neural NLP: A Survey. Andreas Madsen, Siva Reddy, A. Chandar. [XAI] (10 Aug 2021)
- Harnessing value from data science in business: ensuring explainability and fairness of solutions. Krzysztof Chomiak, Michal Miktus. (10 Aug 2021)
- Human-in-the-loop Extraction of Interpretable Concepts in Deep Learning Models. Zhenge Zhao, Panpan Xu, C. Scheidegger, Liu Ren. (08 Aug 2021)
- Fairness Properties of Face Recognition and Obfuscation Systems. Harrison Rosenberg, Brian Tang, Kassem Fawaz, S. Jha. [PICV] (05 Aug 2021)
- Reducing Unintended Bias of ML Models on Tabular and Textual Data. Guilherme Alves, M. Amblard, Fabien Bernier, Miguel Couceiro, A. Napoli. [FaML] (05 Aug 2021)
- Discovering User-Interpretable Capabilities of Black-Box Planning Agents. Pulkit Verma, Shashank Rao Marpally, Siddharth Srivastava. [ELM, LLMAG] (28 Jul 2021)
- GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks. Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, Pietro Lió. (25 Jul 2021)
- Using a Cross-Task Grid of Linear Probes to Interpret CNN Model Predictions On Retinal Images. Katy Blumer, Subhashini Venugopalan, Michael P. Brenner, Jon M. Kleinberg. (23 Jul 2021)
- Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Bas H. M. van der Velden, Hugo J. Kuijf, K. Gilhuijs, M. Viergever. [XAI] (22 Jul 2021)
- Roadmap of Designing Cognitive Metrics for Explainable Artificial Intelligence (XAI). J. H. Hsiao, H. Ngai, Luyu Qiu, Yi Yang, Caleb Chen Cao. [XAI] (20 Jul 2021)
- Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior. Angie Boggust, Benjamin Hoover, Arvindmani Satyanarayan, Hendrik Strobelt. (20 Jul 2021)
- One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images. Weina Jin, Xiaoxiao Li, Ghassan Hamarneh. [FAtt] (11 Jul 2021)
- Using Causal Analysis for Conceptual Deep Learning Explanation. Sumedha Singla, Stephen Wallace, Sofia Triantafillou, Kayhan Batmanghelich. [CML] (10 Jul 2021)
- When and How to Fool Explainable Models (and Humans) with Adversarial Examples. Jon Vadillo, Roberto Santana, Jose A. Lozano. [SILM, AAML] (05 Jul 2021)
- Productivity, Portability, Performance: Data-Centric Python. Yiheng Wang, Yao Zhang, Yanzhang Wang, Yan Wan, Jiao Wang, Zhongyuan Wu, Yuhao Yang, Bowen She. (01 Jul 2021)
- Promises and Pitfalls of Black-Box Concept Learning Models. Anita Mahinpei, Justin Clark, Isaac Lage, Finale Doshi-Velez, Weiwei Pan. (24 Jun 2021)
- Towards Fully Interpretable Deep Neural Networks: Are We There Yet? Sandareka Wickramanayake, W. Hsu, M. Lee. [FaML, AI4CE] (24 Jun 2021)
- Meaningfully Debugging Model Mistakes using Conceptual Counterfactual Explanations. Abubakar Abid, Mert Yuksekgonul, James Y. Zou. [CML] (24 Jun 2021)
- Not all users are the same: Providing personalized explanations for sequential decision making problems. Utkarsh Soni, S. Sreedharan, Subbarao Kambhampati. (23 Jun 2021)
- Visual Probing: Cognitive Framework for Explaining Self-Supervised Image Representations. Witold Oleszkiewicz, Dominika Basaj, Igor Sieradzki, Michal Górszczak, Barbara Rychalska, K. Lewandowska, Tomasz Trzciński, Bartosz Zieliński. [SSL] (21 Jun 2021)
- A Game-Theoretic Taxonomy of Visual Concepts in DNNs. Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, Quanshi Zhang. (21 Jun 2021)
- Guided Integrated Gradients: An Adaptive Path Method for Removing Noise. A. Kapishnikov, Subhashini Venugopalan, Besim Avci, Benjamin D. Wedin, Michael Terry, Tolga Bolukbasi. (17 Jun 2021)
- Model-Based Counterfactual Synthesizer for Interpretation. Fan Yang, Sahan Suresh Alva, Jiahao Chen, X. Hu. (16 Jun 2021)
- Best of both worlds: local and global explanations with human-understandable concepts. Jessica Schrouff, Sebastien Baur, Shaobo Hou, Diana Mincu, Eric Loreaux, Ralph Blanes, James Wexler, Alan Karthikesalingam, Been Kim. [FAtt] (16 Jun 2021)
- Keep CALM and Improve Visual Feature Attribution. Jae Myung Kim, Junsuk Choe, Zeynep Akata, Seong Joon Oh. [FAtt] (15 Jun 2021)
- Canonical Face Embeddings. David G. McNeely-White, Benjamin Sattelberg, Nathaniel Blanchard, Ross Beveridge. [CVBM] (15 Jun 2021)
- Entropy-based Logic Explanations of Neural Networks. Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Lió, Marco Gori, S. Melacci. [FAtt, XAI] (12 Jun 2021)
- Neural Networks for Partially Linear Quantile Regression. Qixian Zhong, Jane-ling Wang. (11 Jun 2021)
- Interpreting Expert Annotation Differences in Animal Behavior. Megan Tjandrasuwita, Jennifer J. Sun, Ann Kennedy, Swarat Chaudhuri, Yisong Yue. (11 Jun 2021)
- Exploiting auto-encoders and segmentation methods for middle-level explanations of image classification systems. Andrea Apicella, Salvatore Giugliano, Francesco Isgrò, R. Prevete. (09 Jun 2021)
- Taxonomy of Machine Learning Safety: A Survey and Primer. Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, J. Yadawa. (09 Jun 2021)
- On the Evolution of Neuron Communities in a Deep Learning Architecture. Sakib Mostafa, Debajyoti Mondal. [GNN] (08 Jun 2021)
- On the Lack of Robust Interpretability of Neural Text Classifiers. Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Ranjan Das, K. Kenthapadi. [AAML] (08 Jun 2021)
- 3DB: A Framework for Debugging Computer Vision Models. Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai H. Vemprala, Logan Engstrom, ..., Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, A. Madry. (07 Jun 2021)
- Finding and Fixing Spurious Patterns with Explanations. Gregory Plumb, Marco Tulio Ribeiro, Ameet Talwalkar. (03 Jun 2021)
- DISSECT: Disentangled Simultaneous Explanations via Concept Traversals. Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, B. Eoff, Rosalind W. Picard. [AAML] (31 May 2021)
- The Definitions of Interpretability and Learning of Interpretable Models. Weishen Pan, Changshui Zhang. [FaML, XAI] (29 May 2021)
- Do not explain without context: addressing the blind spot of model explanations. Katarzyna Woźnica, Katarzyna Pękala, Hubert Baniecki, Wojciech Kretowicz, Elżbieta Sienkiewicz, P. Biecek. (28 May 2021)
- Bridging the Gap Between Explainable AI and Uncertainty Quantification to Enhance Trustability. Dominik Seuss. (25 May 2021)
- Explainable Machine Learning with Prior Knowledge: An Overview. Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rueden. [XAI] (21 May 2021)
- Expressive Explanations of DNNs by Combining Concept Analysis with ILP. Johannes Rabold, Gesina Schwalbe, Ute Schmid. (16 May 2021)
- A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts. Gesina Schwalbe, Bettina Finzel. [XAI] (15 May 2021)
- Cause and Effect: Hierarchical Concept-based Explanation of Neural Networks. Mohammad Nokhbeh Zaeem, Majid Komeili. [CML] (14 May 2021)