Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola
20 June 2018
Tags: MILM, XAI

Papers citing "Towards Robust Interpretability with Self-Explaining Neural Networks"

50 / 507 papers shown

DocVXQA: Context-Aware Visual Explanations for Document Question Answering
Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky, Kangsoo Jung, Ernest Valveny, Dimosthenis Karatzas
12 May 2025

Prediction via Shapley Value Regression
Amr Alkhatib, Roman Bresson, Henrik Bostrom, Michalis Vazirgiannis
Tags: TDI, FAtt
07 May 2025

PointExplainer: Towards Transparent Parkinson's Disease Diagnosis
Xuechao Wang, S. Nõmm, Junqing Huang, Kadri Medijainen, A. Toomela, Michael Ruzhansky
Tags: AAML, FAtt
04 May 2025

If Concept Bottlenecks are the Question, are Foundation Models the Answer?
Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato
28 Apr 2025

AI Awareness
X. Li, Haoyuan Shi, Rongwu Xu, Wei Xu
25 Apr 2025

Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts
M. Zarlenga, Gabriele Dominici, Pietro Barbiero, Z. Shams, M. Jamnik
Tags: KELM
24 Apr 2025

What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI)
Felix Kares, Timo Speith, Hanwei Zhang, Markus Langer
Tags: FAtt, XAI
23 Apr 2025

Leveraging multimodal explanatory annotations for video interpretation with Modality Specific Dataset
Elisa Ancarani, Julie Tores, L. Sassatelli, Rémy Sun, Hui-Yin Wu, F. Precioso
15 Apr 2025

ProtoECGNet: Case-Based Interpretable Deep Learning for Multi-Label ECG Classification with Contrastive Learning
S., David Chen, Thomas Statchen, Michael C. Burkhart, Nipun Bhandari, Bashar Ramadan, Brett Beaulieu-Jones
11 Apr 2025

Towards an Evaluation Framework for Explainable Artificial Intelligence Systems for Health and Well-being
Esperança Amengual-Alcover, Antoni Jaume-i-Capó, Miquel Miró-Nicolau, Gabriel Moyà Alcover, Antonia Paniza-Fullana
11 Apr 2025

A constraints-based approach to fully interpretable neural networks for detecting learner behaviors
Juan D. Pinto, Luc Paquette
10 Apr 2025

V-CEM: Bridging Performance and Intervenability in Concept-based Models
Francesco De Santis, Gabriele Ciravegna, Philippe Bich, Danilo Giordano, Tania Cerquitelli
04 Apr 2025

Interpretable Machine Learning in Physics: A Review
Sebastian Johann Wetzel, Seungwoong Ha, Raban Iten, Miriam Klopotek, Ziming Liu
Tags: AI4CE
30 Mar 2025

Language Guided Concept Bottleneck Models for Interpretable Continual Learning
Lu Yu, Haoyu Han, Zhe Tao, Hantao Yao, Changsheng Xu
Tags: CLL
30 Mar 2025

Self-Explaining Neural Networks for Business Process Monitoring
Shahaf Bassan, Shlomit Gur, Sergey Zeltyn, Konstantinos Mavrogiorgos, Ron Eliav, Dimosthenis Kyriazis
23 Mar 2025

Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes
Nhi Pham, Bernt Schiele, Adam Kortylewski, Jonas Fischer
17 Mar 2025

HyConEx: Hypernetwork classifier with counterfactual explanations
Patryk Marszałek, Ulvi Movsum-zada, Oleksii Furman, Kamil Ksiazek, P. Spurek, Marek Śmieja
16 Mar 2025

Causally Reliable Concept Bottleneck Models
Giovanni De Felice, Arianna Casanova Flores, Francesco De Santis, Silvia Santini, Johannes Schneider, Pietro Barbiero, Alberto Termine
06 Mar 2025

ILLC: Iterative Layer-by-Layer Compression for Enhancing Structural Faithfulness in SpArX
Ungsik Kim
05 Mar 2025

Controlled Model Debiasing through Minimal and Interpretable Updates
Federico Di Gennaro, Thibault Laugel, Vincent Grari, Marcin Detyniecki
Tags: FaML
28 Feb 2025

QPM: Discrete Optimization for Globally Interpretable Image Classification
Thomas Norrenbrock, T. Kaiser, Sovan Biswas, R. Manuvinakurike, Bodo Rosenhahn
27 Feb 2025

Evaluate with the Inverse: Efficient Approximation of Latent Explanation Quality Distribution
Carlos Eiras-Franco, Anna Hedström, Marina M.-C. Höhne
Tags: XAI
24 Feb 2025

Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks
Jon Vadillo, Roberto Santana, J. A. Lozano, Marta Z. Kwiatkowska
Tags: BDL, AAML
17 Feb 2025

Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens
Samuele Bortolotti, Emanuele Marconato, Paolo Morettin, Andrea Passerini, Stefano Teso
16 Feb 2025

Self-Explaining Hypergraph Neural Networks for Diagnosis Prediction
Leisheng Yu, Yanxiao Cai, Minxing Zhang, Xia Hu
Tags: FAtt
15 Feb 2025

Sample-efficient Learning of Concepts with Theoretical Guarantees: from Data to Concepts without Interventions
H. Fokkema, T. Erven, Sara Magliacane
10 Feb 2025

VLG-CBM: Training Concept Bottleneck Models with Vision-Language Guidance
Divyansh Srivastava, Beatriz Cabrero-Daniel, Christian Berger
Tags: VLM
17 Jan 2025

Explaining the Behavior of Black-Box Prediction Algorithms with Causal Learning
Numair Sani, Daniel Malinsky, I. Shpitser
Tags: CML
10 Jan 2025

Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations
Xin-Chao Xu, Yi Qin, Lu Mi, Hao Wang, X. Li
03 Jan 2025

Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger
Tags: XAI, ELM
03 Jan 2025

Concept Learning in the Wild: Towards Algorithmic Understanding of Neural Networks
Elad Shoham, Hadar Cohen, Khalil Wattad, Havana Rika, Dan Vilenchik
15 Dec 2024

Advancing Attribution-Based Neural Network Explainability through Relative Absolute Magnitude Layer-Wise Relevance Propagation and Multi-Component Evaluation
Davor Vukadin, Petar Afrić, Marin Šilić, Goran Delač
Tags: FAtt
12 Dec 2024

From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation
Kristoffer Wickstrøm, Marina M.-C. Höhne, Anna Hedström
Tags: AAML
07 Dec 2024

OMENN: One Matrix to Explain Neural Networks
Adam Wróbel, Mikołaj Janusz, Bartosz Zieliński, Dawid Rymarczyk
Tags: FAtt, AAML
03 Dec 2024

Explaining the Impact of Training on Vision Models via Activation Clustering
Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair
29 Nov 2024

Establishing and Evaluating Trustworthy AI: Overview and Research Challenges
Dominik Kowald, S. Scher, Viktoria Pammer-Schindler, Peter Müllner, Kerstin Waxnegger, ..., Andreas Truegler, Eduardo E. Veas, Roman Kern, Tomislav Nad, Simone Kopeinik
15 Nov 2024

Benchmarking XAI Explanations with Human-Aligned Evaluations
Rémi Kazmierczak, Steve Azzolin, Eloise Berthier, Anna Hedström, Patricia Delhomme, ..., Goran Frehse, Massimiliano Mancini, Baptiste Caramiaux, Andrea Passerini, Gianni Franchi
04 Nov 2024

ParseCaps: An Interpretable Parsing Capsule Network for Medical Image Diagnosis
Xinyu Geng, Jiaming Wang, Jun Xu
Tags: MedIm
03 Nov 2024

Learning local discrete features in explainable-by-design convolutional neural networks
Pantelis I. Kaplanoglou, Konstantinos Diamantaras
Tags: FAtt
31 Oct 2024

Directly Optimizing Explanations for Desired Properties
Hiwot Belay Tadesse, Alihan Hüyük, Weiwei Pan, Finale Doshi-Velez
Tags: FAtt
31 Oct 2024

Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Transformers
Shaobo Wang, Hongxuan Tang, Mingyang Wang, H. Zhang, Xuyang Liu, Weiya Li, Xuming Hu, Linfeng Zhang
29 Oct 2024

Rethinking the Principle of Gradient Smooth Methods in Model Explanation
Linjiang Zhou, Chao Ma, Zepeng Wang, Xiaochuan Shi
Tags: FAtt
10 Oct 2024

Unlearning-based Neural Interpretations
Ching Lam Choi, Alexandre Duplessis, Serge Belongie
Tags: FAtt
10 Oct 2024

Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks
Junlin Hou, Sicen Liu, Yequan Bie, Hongmei Wang, Andong Tan, Luyang Luo, Hao Chen
Tags: XAI
03 Oct 2024

Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?
Jae Hee Lee, Georgii Mikriukov, Gesina Schwalbe, Stefan Wermter, D. Wolter
20 Sep 2024

The Gaussian Discriminant Variational Autoencoder (GdVAE): A Self-Explainable Model with Counterfactual Explanations
Anselm Haselhoff, Kevin Trelenberg, Fabian Küppers, Jonas Schneider
19 Sep 2024

InfoDisent: Explainability of Image Classification Models by Information Disentanglement
Łukasz Struski, Dawid Rymarczyk, Jacek Tabor
16 Sep 2024

MulCPred: Learning Multi-modal Concepts for Explainable Pedestrian Action Prediction
Yan Feng, Alexander Carballo, Keisuke Fujii, Robin Karlsson, Ming Ding, K. Takeda
14 Sep 2024

An Evaluation of Explanation Methods for Black-Box Detectors of Machine-Generated Text
Loris Schoenegger, Yuxi Xia, Benjamin Roth
Tags: FAtt
26 Aug 2024

Evaluating Explainable AI Methods in Deep Learning Models for Early Detection of Cerebral Palsy
Kimji N. Pellano, Inga Strümke, Daniel Groos, Lars Adde, Espen Alexander F. Ihlen
14 Aug 2024