ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv: 1711.11279

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)

30 November 2017
Been Kim
Martin Wattenberg
Justin Gilmer
Carrie J. Cai
James Wexler
F. Viégas
Rory Sayres
    FAtt

Papers citing "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)"

50 / 1,046 papers shown
Axe the X in XAI: A Plea for Understandable AI
Andrés Páez
11
0
0
01 Mar 2024
ProtoP-OD: Explainable Object Detection with Prototypical Parts
Pavlos Rath-Manakidis
Frederik Strothmann
Tobias Glasmachers
Laurenz Wiskott
ViT
35
1
0
29 Feb 2024
How to Train your Antivirus: RL-based Hardening through the Problem-Space
Jacopo Cortellazzi
Ilias Tsingenopoulos
B. Bosanský
Simone Aonzo
Davy Preuveneers
Wouter Joosen
Fabio Pierazzi
Lorenzo Cavallaro
21
2
0
29 Feb 2024
WWW: A Unified Framework for Explaining What, Where and Why of Neural Networks by Interpretation of Neuron Concepts
Yong Hyun Ahn
Hyeon Bae Kim
Seong Tae Kim
34
4
0
29 Feb 2024
Understanding the Role of Pathways in a Deep Neural Network
Lei Lyu
Chen Pang
Jihua Wang
27
3
0
28 Feb 2024
Incorporating Expert Rules into Neural Networks in the Framework of Concept-Based Learning
A. Konstantinov
Lev V. Utkin
38
3
0
22 Feb 2024
A hierarchical decomposition for explaining ML performance discrepancies
Jean Feng
Harvineet Singh
Fan Xia
Adarsh Subbaswamy
Alexej Gossmann
CML
30
0
0
22 Feb 2024
Understanding the Dataset Practitioners Behind Large Language Model Development
Crystal Qian
Emily Reif
Minsuk Kahng
39
3
0
21 Feb 2024
Identifying Semantic Induction Heads to Understand In-Context Learning
Jie Ren
Qipeng Guo
Hang Yan
Dongrui Liu
Xipeng Qiu
Dahua Lin
27
24
0
20 Feb 2024
Prospector Heads: Generalized Feature Attribution for Large Models & Data
Gautam Machiraju
Alexander Derry
Arjun D Desai
Neel Guha
Amir-Hossein Karimi
James Zou
Russ Altman
Christopher Ré
Parag Mallick
AI4TS
MedIm
45
0
0
18 Feb 2024
Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE)
Usha Bhalla
Alexander X. Oesterling
Suraj Srinivas
Flavio du Pin Calmon
Himabindu Lakkaraju
36
35
0
16 Feb 2024
Explaining Probabilistic Models with Distributional Values
Luca Franceschi
Michele Donini
Cédric Archambeau
Matthias Seeger
FAtt
32
2
0
15 Feb 2024
Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models
Goutham Rajendran
Simon Buchholz
Bryon Aragam
Bernhard Schölkopf
Pradeep Ravikumar
AI4CE
91
21
0
14 Feb 2024
Implementing local-explainability in Gradient Boosting Trees: Feature Contribution
Ángel Delgado-Panadero
Beatriz Hernández-Lorca
María Teresa García-Ordás
J. Benítez-Andrades
32
52
0
14 Feb 2024
Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain
Yongchen Zhou
Richard Jiang
24
2
0
07 Feb 2024
InterpretCC: Intrinsic User-Centric Interpretability through Global Mixture of Experts
Vinitra Swamy
Syrielle Montariol
Julian Blackwell
Jibril Frej
Martin Jaggi
Tanja Kaser
31
3
0
05 Feb 2024
Focal Modulation Networks for Interpretable Sound Classification
Luca Della Libera
Cem Subakan
Mirco Ravanelli
28
2
0
05 Feb 2024
XAI for Skin Cancer Detection with Prototypes and Non-Expert Supervision
Miguel Correia
Alceu Bissoto
Carlos Santiago
Catarina Barata
29
0
0
02 Feb 2024
Rethinking Interpretability in the Era of Large Language Models
Chandan Singh
J. Inala
Michel Galley
Rich Caruana
Jianfeng Gao
LRM
AI4CE
77
61
0
30 Jan 2024
Bridging Human Concepts and Computer Vision for Explainable Face Verification
Miriam Doh
Caroline Mazini Rodrigues
Nicolas Boutry
Laurent Najman
M. Mancas
H. Bersini
CVBM
27
0
0
30 Jan 2024
Defining and Extracting generalizable interaction primitives from DNNs
Lu Chen
Siyu Lou
Benhao Huang
Quanshi Zhang
29
9
0
29 Jan 2024
On the Emergence of Symmetrical Reality
Zhenlian Zhang
Zeyu Zhang
Ziyuan Jiao
Yao Su
Hangxin Liu
Wei Wang
Song-Chun Zhu
18
4
0
26 Jan 2024
Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper
Carson Ezell
Charlotte Siegmann
Noam Kolt
Taylor Lynn Curtis
...
Michael Gerovitch
David Bau
Max Tegmark
David M. Krueger
Dylan Hadfield-Menell
AAML
34
78
0
25 Jan 2024
Respect the model: Fine-grained and Robust Explanation with Sharing Ratio Decomposition
Sangyu Han
Yearim Kim
Nojun Kwak
AAML
26
1
0
25 Jan 2024
Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?
Sonia Laguna
Ricards Marcinkevics
Moritz Vandenhirtz
Julia E. Vogt
30
17
0
24 Jan 2024
Visual Objectification in Films: Towards a New AI Task for Video Interpretation
Julie Tores
L. Sassatelli
Hui-Yin Wu
Clement Bergman
Lea Andolfi
...
F. Precioso
Thierry Devars
Magali Guaresi
Virginie Julliard
Sarah Lecossais
35
2
0
24 Jan 2024
Understanding Video Transformers via Universal Concept Discovery
M. Kowal
Achal Dave
Rares Ambrus
Adrien Gaidon
Konstantinos G. Derpanis
P. Tokmakov
ViT
37
8
0
19 Jan 2024
Deep spatial context: when attention-based models meet spatial regression
Paulina Tomaszewska
Elżbieta Sienkiewicz
Mai P. Hoang
Przemysław Biecek
15
1
0
18 Jan 2024
DiConStruct: Causal Concept-based Explanations through Black-Box Distillation
Ricardo Moreira
Jacopo Bono
Mário Cardoso
Pedro Saleiro
Mário A. T. Figueiredo
P. Bizarro
CML
28
4
0
16 Jan 2024
MICA: Towards Explainable Skin Lesion Diagnosis via Multi-Level Image-Concept Alignment
Yequan Bie
Luyang Luo
Hao Chen
24
14
0
16 Jan 2024
An Axiomatic Approach to Model-Agnostic Concept Explanations
Zhili Feng
Michal Moshkovitz
Dotan Di Castro
J. Zico Kolter
LRM
23
0
0
12 Jan 2024
Decoupling Pixel Flipping and Occlusion Strategy for Consistent XAI Benchmarks
Stefan Blücher
Johanna Vielhaben
Nils Strodthoff
AAML
66
20
0
12 Jan 2024
Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models
Asma Ghandeharioun
Avi Caciularu
Adam Pearce
Lucas Dixon
Mor Geva
34
87
0
11 Jan 2024
The two-way knowledge interaction interface between humans and neural networks
Zhanliang He
Nuoye Xiong
Hongsheng Li
Peiyi Shen
Guangming Zhu
Liang Zhang
HAI
15
0
0
10 Jan 2024
Concept Alignment
Sunayana Rane
Polyphony J. Bruna
Ilia Sucholutsky
Christopher Kello
Thomas L. Griffiths
CVBM
31
7
0
09 Jan 2024
Towards Explainable Artificial Intelligence (XAI): A Data Mining Perspective
Haoyi Xiong
Xuhong Li
Xiaofei Zhang
Jiamin Chen
Xinhao Sun
Yuchen Li
Zeyi Sun
Mengnan Du
XAI
40
8
0
09 Jan 2024
Do Concept Bottleneck Models Obey Locality?
Naveen Raman
M. Zarlenga
Juyeon Heo
M. Jamnik
36
7
0
02 Jan 2024
Understanding Distributed Representations of Concepts in Deep Neural Networks without Supervision
Wonjoon Chang
Dahee Kwon
Jaesik Choi
19
1
0
28 Dec 2023
Observable Propagation: Uncovering Feature Vectors in Transformers
Jacob Dunefsky
Arman Cohan
35
2
0
26 Dec 2023
Anomaly component analysis
Romain Valla
Pavlo Mozharovskyi
Florence d'Alché-Buc
19
0
0
26 Dec 2023
Q-SENN: Quantized Self-Explaining Neural Networks
Thomas Norrenbrock
Marco Rudolph
Bodo Rosenhahn
FAtt
AAML
MILM
25
6
0
21 Dec 2023
Concept-based Explainable Artificial Intelligence: A Survey
Eleonora Poeta
Gabriele Ciravegna
Eliana Pastor
Tania Cerquitelli
Elena Baralis
LRM
XAI
21
41
0
20 Dec 2023
CEIR: Concept-based Explainable Image Representation Learning
Yan Cui
Shuhong Liu
Liuzhuozheng Li
Zhiyuan Yuan
SSL
VLM
26
3
0
17 Dec 2023
Rethinking Robustness of Model Attributions
Sandesh Kamath
Sankalp Mittal
Amit Deshpande
Vineeth N. Balasubramanian
22
2
0
16 Dec 2023
Estimation of Concept Explanations Should be Uncertainty Aware
Vihari Piratla
Juyeon Heo
Katherine M. Collins
Sukriti Singh
Adrian Weller
24
1
0
13 Dec 2023
Evaluating the Utility of Model Explanations for Model Development
Shawn Im
Jacob Andreas
Yilun Zhou
XAI
FAtt
ELM
19
1
0
10 Dec 2023
Finding Concept Representations in Neural Networks with Self-Organizing Maps
Mathieu d'Aquin
MILM
19
1
0
10 Dec 2023
Artificial Neural Nets and the Representation of Human Concepts
Timo Freiesleben
NAI
22
1
0
08 Dec 2023
SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu
S. Szyller
Nadarajah Asokan
AAML
47
2
0
07 Dec 2023
Class-Discriminative Attention Maps for Vision Transformers
L. Brocki
Jakub Binda
N. C. Chung
MedIm
30
3
0
04 Dec 2023