Concept Embedding Analysis: A Review
Gesina Schwalbe, 25 March 2022 (arXiv:2203.13909)
Papers citing "Concept Embedding Analysis: A Review" (20 of 20 papers shown)
If Concept Bottlenecks are the Question, are Foundation Models the Answer? (28 Apr 2025)
Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato

On Background Bias of Post-Hoc Concept Embeddings in Computer Vision DNNs (11 Apr 2025)
Gesina Schwalbe, Georgii Mikriukov, Edgar Heinert, Stavros Gerolymatos, Mert Keser, Alois Knoll, Matthias Rottmann, Annika Mütze

Explaining Domain Shifts in Language: Concept erasing for Interpretable Image Classification (24 Mar 2025) [VLM]
Zequn Zeng, Yudi Su, Jianqiao Sun, Tiansheng Wen, Hao Zhang, Zhengjue Wang, Bo Chen, Hongwei Liu, Jiawei Ma

Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens (16 Feb 2025)
Samuele Bortolotti, Emanuele Marconato, Paolo Morettin, Andrea Passerini, Stefano Teso

Unveiling Ontological Commitment in Multi-Modal Foundation Models (25 Sep 2024)
Mert Keser, Gesina Schwalbe, Niki Amini-Naieni, Matthias Rottmann, Alois Knoll

Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go? (20 Sep 2024)
Jae Hee Lee, Georgii Mikriukov, Gesina Schwalbe, Stefan Wermter, D. Wolter

Incremental Residual Concept Bottleneck Models (13 Apr 2024)
Chenming Shang, Shiji Zhou, Hengyuan Zhang, Xinzhe Ni, Yujiu Yang, Yuwang Wang

Understanding Multimodal Deep Neural Networks: A Concept Selection View (13 Apr 2024)
Chenming Shang, Hengyuan Zhang, Hao Wen, Yujiu Yang

Enhancing Interpretability of Vertebrae Fracture Grading using Human-interpretable Prototypes (03 Apr 2024)
Poulami Sinhamahapatra, Suprosanna Shit, Anjany Sekuboyina, M. Husseini, D. Schinz, Nicolas Lenhart, Bjoern H. Menze, Jan Kirschke, Karsten Roscher, Stephan Guennemann

Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement (26 Nov 2023)
Avani Gupta, Saurabh Saini, P. J. Narayanan

From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks (18 Oct 2023)
Jae Hee Lee, Sergio Lanza, Stefan Wermter

Interpretability is in the Mind of the Beholder: A Causal Framework for Human-interpretable Representation Learning (14 Sep 2023)
Emanuele Marconato, Andrea Passerini, Stefano Teso

How Faithful are Self-Explainable GNNs? (29 Aug 2023)
Marc Christiansen, Lea Villadsen, Zhiqiang Zhong, Stefano Teso, Davide Mottin

A Unified Concept-Based System for Local, Global, and Misclassification Explanations (06 Jun 2023) [FAtt]
Fatemeh Aghaeipoor, D. Asgarian, Mohammad Sabokrou

Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces (30 Apr 2023)
Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade

Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability (28 Apr 2023) [FAtt]
Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade

Changes from Classical Statistics to Modern Statistics and Data Science (30 Oct 2022)
Kai Zhang, Shan-Yu Liu, M. Xiong

LAP: An Attention-Based Module for Concept Based Self-Interpretation and Knowledge Injection in Convolutional Neural Networks (27 Jan 2022) [FAtt]
Rassa Ghavami Modegh, Ahmadali Salimi, Alireza Dizaji, Hamid R. Rabiee

Weakly Supervised Multi-task Learning for Concept-based Explainability (26 Apr 2021)
Catarina Belém, Vladimir Balayan, Pedro Saleiro, P. Bizarro

On Completeness-aware Concept-Based Explanations in Deep Neural Networks (17 Oct 2019) [FAtt]
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar