ResearchTrend.AI

Interpretability is in the Mind of the Beholder: A Causal Framework for Human-interpretable Representation Learning

14 September 2023
Emanuele Marconato, Andrea Passerini, Stefano Teso

Papers citing "Interpretability is in the Mind of the Beholder: A Causal Framework for Human-interpretable Representation Learning"

(11 papers)

  1. If Concept Bottlenecks are the Question, are Foundation Models the Answer?
     Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato (28 Apr 2025)
  2. Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens
     Samuele Bortolotti, Emanuele Marconato, Paolo Morettin, Andrea Passerini, Stefano Teso (16 Feb 2025)
  3. Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations [CML]
     Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas F. Icard, Noah D. Goodman (05 Mar 2023)
  4. GlanceNets: Interpretabile, Leak-proof Concept-based Models
     Emanuele Marconato, Andrea Passerini, Stefano Teso (31 May 2022)
  5. Post-hoc Concept Bottleneck Models
     Mert Yuksekgonul, Maggie Wang, James Y. Zou (31 May 2022)
  6. Interactive Disentanglement: Learning Concepts by Interacting with their Prototype Representations
     Wolfgang Stammer, Marius Memmel, P. Schramowski, Kristian Kersting (04 Dec 2021)
  7. Coherent Hierarchical Multi-Label Classification Networks [AILaw]
     Eleonora Giunchiglia, Thomas Lukasiewicz (20 Oct 2020)
  8. Conditional Gaussian Distribution Learning for Open Set Recognition [BDL, UQCV]
     Xin Sun, Zhen Yang, Chi Zhang, Guohao Peng, K. Ling (19 Mar 2020)
  9. Weakly-Supervised Disentanglement Without Compromises [CoGe, OOD, DRL]
     Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen (07 Feb 2020)
  10. On Completeness-aware Concept-Based Explanations in Deep Neural Networks [FAtt]
      Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar (17 Oct 2019)
  11. Logic Tensor Networks for Semantic Image Interpretation
      Ivan Donadello, Luciano Serafini, Artur Garcez (24 May 2017)