Interpretability Beyond Classification Output: Semantic Bottleneck Networks

25 July 2019 · arXiv:1907.10882
M. Losch, Mario Fritz, Bernt Schiele
Tags: UQCV

Papers citing "Interpretability Beyond Classification Output: Semantic Bottleneck Networks"

19 / 19 papers shown
1. Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models
   Itay Benou, Tammy Riklin-Raviv · 27 Feb 2025

2. COMIX: Compositional Explanations using Prototypes
   S. Sivaprasad, D. Kangin, Plamen Angelov, Mario Fritz · 10 Jan 2025

3. Image-guided topic modeling for interpretable privacy classification
   Alina Elena Baia, Andrea Cavallaro · 27 Sep 2024

4. Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation
   Hugo Porta, Emanuele Dalsasso, Diego Marcos, D. Tuia · 14 Sep 2024

5. DEPICT: Diffusion-Enabled Permutation Importance for Image Classification Tasks
   Sarah Jabbour, Gregory Kondas, Ella Kazerooni, Michael Sjoding, David Fouhey, Jenna Wiens · 19 Jul 2024 · Tags: FAtt, DiffM

6. Understanding Multimodal Deep Neural Networks: A Concept Selection View
   Chenming Shang, Hengyuan Zhang, Hao Wen, Yujiu Yang · 13 Apr 2024

7. Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning
   Andrei Semenov, Vladimir Ivanov, Aleksandr Beznosikov, Alexander Gasnikov · 04 Apr 2024

8. Interpreting Pretrained Language Models via Concept Bottlenecks
   Zhen Tan, Lu Cheng, Song Wang, Yuan Bo, Jundong Li, Huan Liu · 08 Nov 2023 · Tags: LRM

9. Hierarchical Explanations for Video Action Recognition
   Sadaf Gulshad, Teng Long, N. V. Noord · 01 Jan 2023 · Tags: FAtt

10. Concept Embedding Analysis: A Review
    Gesina Schwalbe · 25 Mar 2022

11. Editing a classifier by rewriting its prediction rules
    Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, A. Madry · 02 Dec 2021 · Tags: KELM

12. Image Classification with Consistent Supporting Evidence
    Peiqi Wang, Ruizhi Liao, Daniel Moyer, Seth Berkowitz, Steven Horng, Polina Golland · 13 Nov 2021

13. Toward a Unified Framework for Debugging Concept-based Models
    A. Bontempelli, Fausto Giunchiglia, Andrea Passerini, Stefano Teso · 23 Sep 2021

14. Promises and Pitfalls of Black-Box Concept Learning Models
    Anita Mahinpei, Justin Clark, Isaac Lage, Finale Doshi-Velez, Weiwei Pan · 24 Jun 2021

15. A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
    Gesina Schwalbe, Bettina Finzel · 15 May 2021 · Tags: XAI

16. Towards a Collective Agenda on AI for Earth Science Data Analysis
    D. Tuia, R. Roscher, Jan Dirk Wegner, Nathan Jacobs, Xiaoxiang Zhu, Gustau Camps-Valls · 11 Apr 2021 · Tags: AI4CE

17. Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
    Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong · 20 Mar 2021 · Tags: FaML, AI4CE, LRM

18. Debiasing Concept-based Explanations with Causal Analysis
    M. T. Bahadori, David Heckerman · 22 Jul 2020 · Tags: FAtt, CML

19. Adversarial examples in the physical world
    Alexey Kurakin, Ian Goodfellow, Samy Bengio · 08 Jul 2016 · Tags: SILM, AAML