
This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks (arXiv:2105.02968)

Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler
5 May 2021

Papers citing "This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks"

9 papers shown
MERA: Multimodal and Multiscale Self-Explanatory Model with Considerably Reduced Annotation for Lung Nodule Diagnosis
Jiahao Lu, Chong Yin, Silvia Ingala, Kenny Erleben, M. Nielsen, S. Darkner
27 Apr 2025

A Robust Prototype-Based Network with Interpretable RBF Classifier Foundations
S. Saralajew, Ashish Rana, T. Villmann, Ammar Shaker
Community: OOD
20 Dec 2024

ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation [Technical Report]
Hamed Ayoobi, Nico Potyka, Francesca Toni
26 Nov 2023

Take 5: Interpretable Image Classification with a Handful of Features
Thomas Norrenbrock, Marco Rudolph, Bodo Rosenhahn
Community: FAtt
23 Mar 2023

ICICLE: Interpretable Class Incremental Continual Learning
Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartlomiej Twardowski
Community: CLL
14 Mar 2023

GlanceNets: Interpretable, Leak-proof Concept-based Models
Emanuele Marconato, Andrea Passerini, Stefano Teso
31 May 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
Zohaib Salahuddin, Henry C. Woodruff, A. Chatterjee, Philippe Lambin
01 Nov 2021

Toward a Unified Framework for Debugging Concept-based Models
A. Bontempelli, Fausto Giunchiglia, Andrea Passerini, Stefano Teso
23 Sep 2021