ResearchTrend.AI
ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model
Neural Information Processing Systems (NeurIPS), 2022 · arXiv:2210.08151
15 October 2022
Srishti Gautam, Ahcène Boubekki, Stine Hansen, Suaiba Amina Salahuddin, Robert Jenssen, Marina M.-C. Höhne, Michael C. Kampffmeyer

Papers citing "ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model"

31 papers shown
CountXplain: Interpretable Cell Counting with Prototype-Based Density Map Estimation
Abdurahman Ali Mohammed, Wallapak Tavanapong, Catherine Fonder, Donald S. Sakaguchi
24 Nov 2025
Comprehensive Evaluation of Prototype Neural Networks
Philipp Schlinge, Steffen Meinert, Martin Atzmueller
09 Jul 2025
Enclosing Prototypical Variational Autoencoder for Explainable Out-of-Distribution Detection
International Conference on Computer Safety, Reliability, and Security (SAFECOMP), 2025
Conrad Orglmeister, Erik Bochinski, Volker Eiselein, Elvira Fleig
17 Jun 2025
Fixed Point Explainability
Emanuele La Malfa, Jon Vadillo, Marco Molinari, Michael Wooldridge
18 May 2025
DocVXQA: Context-Aware Visual Explanations for Document Question Answering
Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky, Kangsoo Jung, Ernest Valveny, Dimosthenis Karatzas
12 May 2025
Tell me why: Visual foundation models as self-explainable classifiers
Hugues Turbé, Mina Bjelogrlic, G. Mengaldo, Christian Lovis
26 Feb 2025
Self-Explaining Hypergraph Neural Networks for Diagnosis Prediction
ACM Conference on Health, Inference, and Learning (CHIL), 2025
Leisheng Yu, Yanxiao Cai, Minxing Zhang, Helen Zhou
15 Feb 2025
Cross- and Intra-image Prototypical Learning for Multi-label Disease Diagnosis and Interpretation
IEEE Transactions on Medical Imaging (IEEE TMI), 2024
Chong Wang, Fengbei Liu, Yuanhong Chen, Helen Frazer, Gustavo Carneiro
07 Nov 2024
Advancing Interpretability in Text Classification through Prototype Learning
Bowen Wei, Ziwei Zhu
23 Oct 2024
The Gaussian Discriminant Variational Autoencoder (GdVAE): A Self-Explainable Model with Counterfactual Explanations
European Conference on Computer Vision (ECCV), 2024
Anselm Haselhoff, Kevin Trelenberg, Fabian Küppers, Jonas Schneider
19 Sep 2024
Signed Graph Autoencoder for Explainable and Polarization-Aware Network Embeddings
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Nikolaos Nakis, Chrysoula Kosma, Giannis Nikolentzos, Michalis Chatzianastasis, Iakovos Evdaimon, Michalis Vazirgiannis
16 Sep 2024
Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2024
Hugo Porta, Emanuele Dalsasso, Diego Marcos, D. Tuia
14 Sep 2024
This Probably Looks Exactly Like That: An Invertible Prototypical Network
Zachariah Carmichael, Timothy Redgrave, Daniel Gonzalez Cedre, Walter J. Scheirer
16 Jul 2024
Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc
01 Jul 2024
ProtoS-ViT: Visual foundation models for sparse self-explainable classifications
Hugues Turbé, Mina Bjelogrlic, G. Mengaldo, Christian Lovis
14 Jun 2024
DISCRET: Synthesizing Faithful Explanations For Treatment Effect Estimation
Yinjun Wu, Mayank Keoliya, Kan Chen, Neelay Velingker, Ziyang Li, E. Getzen, Qi Long, Mayur Naik, Ravi B. Parikh, Eric Wong
02 Jun 2024
Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks
Jon Vadillo, Roberto Santana, J. A. Lozano, Marta Z. Kwiatkowska
20 Mar 2024
Pantypes: Diverse Representatives for Self-Explainable Models
AAAI Conference on Artificial Intelligence (AAAI), 2024
R. Kjærsgaard, Ahcène Boubekki, Line H. Clemmensen
14 Mar 2024
A Note on Bias to Complete
Jia Xu, Mona Diab
18 Feb 2024
Explaining Time Series via Contrastive and Locally Sparse Perturbations
International Conference on Learning Representations (ICLR), 2024
Zichuan Liu, Yingying Zhang, Tianchun Wang, Zefan Wang, Dongsheng Luo, ..., Min Wu, Yi Wang, Chunlin Chen, Lunting Fan, Qingsong Wen
16 Jan 2024
Prototypical Self-Explainable Models Without Re-training
Srishti Gautam, Ahcène Boubekki, Marina M.-C. Höhne, Michael C. Kampffmeyer
13 Dec 2023
Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable and Trustworthy Image Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023
Chong Wang, Yuanhong Chen, Fengbei Liu, Yuyuan Liu, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro
30 Nov 2023
Human-Guided Complexity-Controlled Abstractions
Neural Information Processing Systems (NeurIPS), 2023
Andi Peng, Mycal Tucker, Eoin M. Kenny, Noga Zaslavsky, Pulkit Agrawal, Julie A. Shah
26 Oct 2023
Pixel-Grounded Prototypical Part Networks
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2023
Zachariah Carmichael, Suhas Lohit, A. Cherian, Michael Jeffrey Jones, Walter J. Scheirer
25 Sep 2023
Interpretability Benchmark for Evaluating Spatial Misalignment of Prototypical Parts Explanations
AAAI Conference on Artificial Intelligence (AAAI), 2023
Mikolaj Sacha, Bartosz Jura, Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński
16 Aug 2023
Interpretable Alzheimer's Disease Classification Via a Contrastive Diffusion Autoencoder
Ayodeji Ijishakin, A. Abdulaal, Adamos Hadjivasiliou, Sophie Martin, James H. Cole
05 Jun 2023
Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency
Neural Information Processing Systems (NeurIPS), 2023
Owen Queen, Thomas Hartvigsen, Teddy Koker, Huan He, Theodoros Tsiligkaridis, Marinka Zitnik
03 Jun 2023
Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
Artificial Intelligence for the Earth Systems (AI4ES), 2023
P. Bommer, M. Kretschmer, Anna Hedström, Dilyara Bareeva, Marina M.-C. Höhne
01 Mar 2023
DORA: Exploring Outlier Representations in Deep Neural Networks
Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Müller, Marina M.-C. Höhne
09 Jun 2022
When and How to Fool Explainable Models (and Humans) with Adversarial Examples
Jon Vadillo, Roberto Santana, Jose A. Lozano
05 Jul 2021
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
2.7K
21,359
0
16 Feb 2016