CRAFT: Concept Recursive Activation FacTorization for Explainability
arXiv: 2211.10154
17 November 2022
Thomas Fel
Agustin Picard
Louis Bethune
Thibaut Boissin
David Vigouroux
Julien Colin
Rémi Cadène
Thomas Serre
Papers citing "CRAFT: Concept Recursive Activation FacTorization for Explainability" (18 of 18 papers shown)
Representational Similarity via Interpretable Visual Concepts
Neehar Kondapaneni, Oisin Mac Aodha, Pietro Perona (DRL) · 19 Mar 2025

Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models
Itay Benou, Tammy Riklin-Raviv · 27 Feb 2025

Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis · 06 Feb 2025

COMIX: Compositional Explanations using Prototypes
S. Sivaprasad, D. Kangin, Plamen Angelov, Mario Fritz · 10 Jan 2025

Explaining the Impact of Training on Vision Models via Activation Clustering
Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair · 29 Nov 2024

GIFT: A Framework for Global Interpretable Faithful Textual Explanations of Vision Classifiers
Éloi Zablocki, Valentin Gerard, Amaia Cardiel, Eric Gaussier, Matthieu Cord, Eduardo Valle · 23 Nov 2024

Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc (SSL) · 01 Jul 2024

Generating Counterfactual Trajectories with Latent Diffusion Models for Concept Discovery
Payal Varshney, Adriano Lucieri, Christoph Balada, Andreas Dengel, Sheraz Ahmed (MedIm, DiffM) · 16 Apr 2024

Natural Example-Based Explainability: a Survey
Antonin Poché, Lucas Hervier, M. Bakkay (XAI) · 05 Sep 2023

Uncovering Unique Concept Vectors through Latent Space Decomposition
Mara Graziani, Laura Mahony, An-phi Nguyen, Henning Muller, Vincent Andrearczyk · 13 Jul 2023

Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex
Drew Linsley, I. F. Rodriguez, Thomas Fel, Michael Arcaro, Saloni Sharma, Margaret Livingstone, Thomas Serre · 06 Jun 2023

Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Muller, G. Montavon (FAtt) · 30 Dec 2022

Toy Models of Superposition
Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, T. Henighan, ..., Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, C. Olah (AAML, MILM) · 21 Sep 2022

Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin · 07 Feb 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky · 06 Dec 2021

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel, Rémi Cadène, Mathieu Chalvidal, Matthieu Cord, David Vigouroux, Thomas Serre (MLAU, FAtt, AAML) · 07 Nov 2021

Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg · 15 Oct 2020

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim (XAI, FaML) · 28 Feb 2017