arXiv:2106.13200
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin
24 June 2021
Papers citing "Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy" (46 papers):
- Pairwise Matching of Intermediate Representations for Fine-grained Explainability
  Lauren Shrack, T. Haucke, Antoine Salaün, Arjun Subramonian, Sara Beery (28 Mar 2025)
- Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes
  Nhi Pham, Bernt Schiele, Adam Kortylewski, Jonas Fischer (17 Mar 2025)
- Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations
  Eren Erogullari, Sebastian Lapuschkin, Wojciech Samek, Frederik Pahde (07 Mar 2025) [LLMSV, CoGe]
- Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps
  Lukasz Sztukiewicz, Ignacy Stepka, Michał Wiliński, Jerzy Stefanowski (28 Feb 2025)
- A Close Look at Decomposition-based XAI-Methods for Transformer Language Models
  L. Arras, Bruno Puri, Patrick Kahardipraja, Sebastian Lapuschkin, Wojciech Samek (21 Feb 2025)
- BEExAI: Benchmark to Evaluate Explainable AI
  Samuel Sithakoul, Sara Meftah, Clément Feutry (29 Jul 2024)
- Challenges in explaining deep learning models for data with biological variation
  Lenka Tětková, E. Dreier, Robin Malm, Lars Kai Hansen (14 Jun 2024) [AAML]
- Locally Testing Model Detections for Semantic Global Concepts
  Franz Motzkus, Georgii Mikriukov, Christian Hellert, Ute Schmid (27 May 2024)
- Towards Natural Machine Unlearning
  Zhengbao He, Tao Li, Xinwen Cheng, Zhehao Huang, Xiaolin Huang (24 May 2024) [MU]
- A Fresh Look at Sanity Checks for Saliency Maps
  Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne (03 May 2024) [FAtt, LRM]
- Sparse Explanations of Neural Networks Using Pruned Layer-Wise Relevance Propagation
  Paulo Yanez Sarmiento, Simon Witzke, Nadja Klein, Bernhard Y. Renard (22 Apr 2024) [FAtt, AAML]
- Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression
  Dilyara Bareeva, Maximilian Dreyer, Frederik Pahde, Wojciech Samek, Sebastian Lapuschkin (15 Apr 2024) [KELM]
- Interpreting End-to-End Deep Learning Models for Speech Source Localization Using Layer-wise Relevance Propagation
  Luca Comanducci, Fabio Antonacci, Augusto Sarti (04 Apr 2024)
- AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers
  Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek (08 Feb 2024)
- Decoupling Pixel Flipping and Occlusion Strategy for Consistent XAI Benchmarks
  Stefan Blücher, Johanna Vielhaben, Nils Strodthoff (12 Jan 2024) [AAML]
- Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test
  Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne (12 Jan 2024) [LRM]
- Prototypical Self-Explainable Models Without Re-training
  Srishti Gautam, Ahcène Boubekki, Marina M.-C. Höhne, Michael C. Kampffmeyer (13 Dec 2023)
- Human-Centered Evaluation of XAI Methods
  Karam Dawoud, Wojciech Samek, Peter Eisert, Sebastian Lapuschkin, Sebastian Bosse (11 Oct 2023)
- Towards Fixing Clever-Hans Predictors with Counterfactual Knowledge Distillation
  Sidney Bender, Christopher J. Anders, Pattarawat Chormai, Heike Marxfeld, J. Herrmann, G. Montavon (02 Oct 2023) [CML]
- From Classification to Segmentation with Explainable AI: A Study on Crack Detection and Growth Monitoring
  Florent Forest, Hugo Porta, D. Tuia, Olga Fink (20 Sep 2023)
- Interpreting Deep Neural Networks with the Package innsight
  Niklas Koenen, Marvin N. Wright (19 Jun 2023) [FAtt]
- Explaining Deep Learning for ECG Analysis: Building Blocks for Auditing and Knowledge Discovery
  Patrick Wagner, Temesgen Mehari, Wilhelm Haverkamp, Nils Strodthoff (26 May 2023)
- XAI-based Comparison of Input Representations for Audio Event Classification
  A. Frommholz, Fabian Seipel, Sebastian Lapuschkin, Wojciech Samek, Johanna Vielhaben (27 Apr 2023) [AAML, AI4TS]
- Robustness of Visual Explanations to Common Data Augmentation
  Lenka Tětková, Lars Kai Hansen (18 Apr 2023) [AAML]
- Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models
  Daniel G. Krakowczyk, Paul Prasse, D. R. Reich, Sebastian Lapuschkin, Tobias Scheffer, Lena A. Jäger (12 Apr 2023)
- Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks
  Lorenz Linhardt, Klaus-Robert Müller, G. Montavon (12 Apr 2023) [AAML]
- Multi-Channel Time-Series Person and Soft-Biometric Identification
  Nilah Ravi Nair, Fernando Moya Rueda, Christopher Reining, Gernot A. Fink (04 Apr 2023)
- Better Understanding Differences in Attribution Methods via Systematic Evaluations
  Sukrut Rao, Moritz D. Boehle, Bernt Schiele (21 Mar 2023) [XAI]
- Explainable AI for Time Series via Virtual Inspection Layers
  Johanna Vielhaben, Sebastian Lapuschkin, G. Montavon, Wojciech Samek (11 Mar 2023) [XAI, AI4TS]
- Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
  Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon (30 Dec 2022) [FAtt]
- Optimizing Explanations by Network Canonization and Hyperparameter Search
  Frederik Pahde, Galip Umit Yolcu, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin (30 Nov 2022)
- Towards More Robust Interpretation via Local Gradient Alignment
  Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon (29 Nov 2022) [FAtt]
- Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations
  Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin (21 Nov 2022)
- Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models
  Liu Zhendong, Wenyu Jiang, Yan Zhang, Chongjun Wang (22 Jun 2022) [CML]
- From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation
  Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, S. Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin (07 Jun 2022) [FAtt]
- Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI
  Sami Ede, Serop Baghdadlian, Leander Weber, A. Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin (04 May 2022) [CLL]
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement
  Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek (15 Mar 2022)
- Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
  Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne (14 Feb 2022) [XAI, ELM]
- Measurably Stronger Explanation Reliability via Model Canonization
  Franz Motzkus, Leander Weber, Sebastian Lapuschkin (14 Feb 2022) [FAtt]
- Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
  Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin (07 Feb 2022)
- Toward Explainable AI for Regression Models
  S. Letzgus, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Müller, G. Montavon (21 Dec 2021) [XAI]
- ECQ^x: Explainability-Driven Quantization for Low-Bit and Sparse DNNs
  Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin (09 Sep 2021) [MQ]
- SpookyNet: Learning Force Fields with Electronic Degrees of Freedom and Nonlocal Effects
  Oliver T. Unke, Stefan Chmiela, M. Gastegger, Kristof T. Schütt, H. E. Sauceda, K. Müller (01 May 2021)
- Choice modelling in the age of machine learning -- discussion paper
  S. Cranenburgh, S. Wang, A. Vij, Francisco Câmara Pereira, J. Walker (28 Jan 2021)
- On the Explanation of Machine Learning Predictions in Clinical Gait Analysis
  D. Slijepcevic, Fabian Horst, Sebastian Lapuschkin, Anna-Maria Raberger, Matthias Zeppelzauer, Wojciech Samek, C. Breiteneder, W. Schöllhorn, B. Horsak (16 Dec 2019)
- AudioMNIST: Exploring Explainable Artificial Intelligence for Audio Analysis on a Simple Benchmark
  Sören Becker, Johanna Vielhaben, M. Ackermann, Klaus-Robert Müller, Sebastian Lapuschkin, Wojciech Samek (09 Jul 2018) [XAI]