ResearchTrend.AI
arXiv:2111.04927 (Cited By)
Self-Interpretable Model with Transformation Equivariant Interpretation

9 November 2021
Yipei Wang, Xiaoqian Wang

Papers citing "Self-Interpretable Model with Transformation Equivariant Interpretation"

20 / 20 papers shown

  1. Self-Explaining Neural Networks for Business Process Monitoring
     Shahaf Bassan, Shlomit Gur, Sergey Zeltyn, Konstantinos Mavrogiorgos, Ron Eliav, Dimosthenis Kyriazis
     23 Mar 2025

  2. Self-Explaining Hypergraph Neural Networks for Diagnosis Prediction
     Leisheng Yu, Yanxiao Cai, Minxing Zhang, Xia Hu
     Tags: FAtt
     15 Feb 2025

  3. One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability
     Gabriel Kasmi, Amandine Brunetto, Thomas Fel, Jayneel Parekh
     Tags: AAML, FAtt
     02 Oct 2024

  4. The Gaussian Discriminant Variational Autoencoder (GdVAE): A Self-Explainable Model with Counterfactual Explanations
     Anselm Haselhoff, Kevin Trelenberg, Fabian Küppers, Jonas Schneider
     19 Sep 2024

  5. META-ANOVA: Screening interactions for interpretable machine learning
     Daniel A. Serino, Marc L. Klasky, Chanmoo Park, Dongha Kim, Yongdai Kim
     02 Aug 2024

  6. Towards White Box Deep Learning
     Maciej Satkiewicz
     Tags: AAML
     14 Mar 2024

  7. Learning the irreversible progression trajectory of Alzheimer's disease
     Yipei Wang, Bing He, S. Risacher, A. Saykin, Jingwen Yan, Xiaoqian Wang
     10 Mar 2024

  8. Path Choice Matters for Clear Attribution in Path Methods
     Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu
     19 Jan 2024

  9. Prototypical Self-Explainable Models Without Re-training
     Srishti Gautam, Ahcène Boubekki, Marina M.-C. Höhne, Michael C. Kampffmeyer
     13 Dec 2023

  10. Towards Faithful Neural Network Intrinsic Interpretation with Shapley Additive Self-Attribution
      Ying Sun, Hengshu Zhu, Huixia Xiong
      Tags: TDI, FAtt, MILM
      27 Sep 2023

  11. Improving Prototypical Visual Explanations with Reward Reweighing, Reselection, and Retraining
      Aaron J. Li, Robin Netzorg, Zhihan Cheng, Zhuoqin Zhang, Bin Yu
      08 Jul 2023

  12. Robustness of Visual Explanations to Common Data Augmentation
      Lenka Tětková, Lars Kai Hansen
      Tags: AAML
      18 Apr 2023

  13. Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance
      Jonathan Crabbé, M. Schaar
      Tags: AAML
      13 Apr 2023

  14. A Test Statistic Estimation-based Approach for Establishing Self-interpretable CNN-based Binary Classifiers
      S. Sengupta, M. Anastasio
      Tags: MedIm
      13 Mar 2023

  15. Bort: Towards Explainable Neural Networks with Bounded Orthogonal Constraint
      Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu
      Tags: AAML
      18 Dec 2022

  16. ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model
      Srishti Gautam, Ahcène Boubekki, Stine Hansen, Suaiba Amina Salahuddin, Robert Jenssen, Marina M.-C. Höhne, Michael C. Kampffmeyer
      15 Oct 2022

  17. eX-ViT: A Novel eXplainable Vision Transformer for Weakly Supervised Semantic Segmentation
      Lu Yu, Wei Xiang, Juan Fang, Yi-Ping Phoebe Chen, Lianhua Chi
      Tags: ViT
      12 Jul 2022

  18. A Unified Study of Machine Learning Explanation Evaluation Metrics
      Yipei Wang, Xiaoqian Wang
      Tags: XAI
      27 Mar 2022

  19. Zero-Shot Text-to-Image Generation
      Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
      Tags: VLM
      24 Feb 2021

  20. Towards A Rigorous Science of Interpretable Machine Learning
      Finale Doshi-Velez, Been Kim
      Tags: XAI, FaML
      28 Feb 2017