Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors
Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A. Ehinger, Benjamin I. P. Rubinstein
arXiv:2006.15417 · 27 June 2020 · FAtt
Papers citing "Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors" (21 papers shown)
1. Representational Similarity via Interpretable Visual Concepts
   Neehar Kondapaneni, Oisin Mac Aodha, Pietro Perona · DRL · 19 Mar 2025
2. Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models
   Itay Benou, Tammy Riklin-Raviv · 27 Feb 2025
3. Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
   Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis · 06 Feb 2025
4. GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers
   Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle · 23 Nov 2024
5. A Concept-Based Explainability Framework for Large Multimodal Models
   Jayneel Parekh, Pegah Khayatan, Mustafa Shukor, A. Newson, Matthieu Cord · 12 Jun 2024
6. Visual Evaluative AI: A Hypothesis-Driven Tool with Concept-Based Explanations and Weight of Evidence
   Thao Le, Tim Miller, Ruihan Zhang, L. Sonenberg, Ronal Singh · 13 May 2024
7. On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis
   Abhilekha Dalal, R. Rayan, Adrita Barua, Eugene Y. Vasserman, Md Kamruzzaman Sarker, Pascal Hitzler · 21 Apr 2024
8. Generating Counterfactual Trajectories with Latent Diffusion Models for Concept Discovery
   Payal Varshney, Adriano Lucieri, Christoph Balada, Andreas Dengel, Sheraz Ahmed · MedIm, DiffM · 16 Apr 2024
9. Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement
   Avani Gupta, Saurabh Saini, P. J. Narayanan · 26 Nov 2023
10. Natural Example-Based Explainability: A Survey
    Antonin Poché, Lucas Hervier, M. Bakkay · XAI · 05 Sep 2023
11. Identifying Interpretable Subspaces in Image Representations
    N. Kalibhat, S. Bhardwaj, Bayan Bruss, Hamed Firooz, Maziar Sanjabi, S. Feizi · FAtt · 20 Jul 2023
12. Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models
    Peter Hase, Mohit Bansal, Been Kim, Asma Ghandeharioun · MILM · 10 Jan 2023
13. Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
    Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon · FAtt · 30 Dec 2022
14. Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models
    Gayda Mutahar, Tim Miller · FAtt · 19 Nov 2022
15. CRAFT: Concept Recursive Activation FacTorization for Explainability
    Thomas Fel, Agustin Picard, Louis Béthune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre · 17 Nov 2022
16. Concept-Based Techniques for "Musicologist-friendly" Explanations in a Deep Music Classifier
    Francesco Foscarin, Katharina Hoedt, Verena Praher, A. Flexer, Gerhard Widmer · 26 Aug 2022
17. Learnable Visual Words for Interpretable Image Recognition
    Wenxi Xiao, Zhengming Ding, Hongfu Liu · VLM · 22 May 2022
18. Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
    Leon Sixt, M. Schuessler, Oana-Iuliana Popescu, Philipp Weiß, Tim Landgraf · FAtt · 25 Apr 2022
19. Concept Embedding Analysis: A Review
    Gesina Schwalbe · 25 Mar 2022
20. Sparse Subspace Clustering for Concept Discovery (SSCCD)
    Johanna Vielhaben, Stefan Blücher, Nils Strodthoff · 11 Mar 2022
21. Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
    Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin · 07 Feb 2022