Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin
24 June 2021

Papers citing "Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy" (46 papers shown)
Pairwise Matching of Intermediate Representations for Fine-grained Explainability
Lauren Shrack, T. Haucke, Antoine Salaün, Arjun Subramonian, Sara Beery
28 Mar 2025

Escaping Plato's Cave: Robust Conceptual Reasoning through Interpretable 3D Neural Object Volumes
Nhi Pham, Bernt Schiele, Adam Kortylewski, Jonas Fischer
17 Mar 2025

Post-Hoc Concept Disentanglement: From Correlated to Isolated Concept Representations
Eren Erogullari, Sebastian Lapuschkin, Wojciech Samek, Frederik Pahde
Topics: LLMSV, CoGe
07 Mar 2025

Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps
Lukasz Sztukiewicz, Ignacy Stepka, Michał Wiliński, Jerzy Stefanowski
28 Feb 2025

A Close Look at Decomposition-based XAI-Methods for Transformer Language Models
L. Arras, Bruno Puri, Patrick Kahardipraja, Sebastian Lapuschkin, Wojciech Samek
21 Feb 2025

BEExAI: Benchmark to Evaluate Explainable AI
Samuel Sithakoul, Sara Meftah, Clément Feutry
29 Jul 2024
Challenges in explaining deep learning models for data with biological variation
Lenka Tětková, E. Dreier, Robin Malm, Lars Kai Hansen
Topics: AAML
14 Jun 2024

Locally Testing Model Detections for Semantic Global Concepts
Franz Motzkus, Georgii Mikriukov, Christian Hellert, Ute Schmid
27 May 2024

Towards Natural Machine Unlearning
Zhengbao He, Tao Li, Xinwen Cheng, Zhehao Huang, Xiaolin Huang
Topics: MU
24 May 2024

A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
Topics: FAtt, LRM
03 May 2024

Sparse Explanations of Neural Networks Using Pruned Layer-Wise Relevance Propagation
Paulo Yanez Sarmiento, Simon Witzke, Nadja Klein, Bernhard Y. Renard
Topics: FAtt, AAML
22 Apr 2024
Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression
Dilyara Bareeva, Maximilian Dreyer, Frederik Pahde, Wojciech Samek, Sebastian Lapuschkin
Topics: KELM
15 Apr 2024

Interpreting End-to-End Deep Learning Models for Speech Source Localization Using Layer-wise Relevance Propagation
Luca Comanducci, Fabio Antonacci, Augusto Sarti
04 Apr 2024

AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers
Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek
08 Feb 2024

Decoupling Pixel Flipping and Occlusion Strategy for Consistent XAI Benchmarks
Stefan Blücher, Johanna Vielhaben, Nils Strodthoff
Topics: AAML
12 Jan 2024

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
Topics: LRM
12 Jan 2024
Prototypical Self-Explainable Models Without Re-training
Srishti Gautam, Ahcène Boubekki, Marina M.-C. Höhne, Michael C. Kampffmeyer
13 Dec 2023

Human-Centered Evaluation of XAI Methods
Karam Dawoud, Wojciech Samek, Peter Eisert, Sebastian Lapuschkin, Sebastian Bosse
11 Oct 2023

Towards Fixing Clever-Hans Predictors with Counterfactual Knowledge Distillation
Sidney Bender, Christopher J. Anders, Pattarawat Chormai, Heike Marxfeld, J. Herrmann, G. Montavon
Topics: CML
02 Oct 2023

From Classification to Segmentation with Explainable AI: A Study on Crack Detection and Growth Monitoring
Florent Forest, Hugo Porta, D. Tuia, Olga Fink
20 Sep 2023

Interpreting Deep Neural Networks with the Package innsight
Niklas Koenen, Marvin N. Wright
Topics: FAtt
19 Jun 2023
Explaining Deep Learning for ECG Analysis: Building Blocks for Auditing and Knowledge Discovery
Patrick Wagner, Temesgen Mehari, Wilhelm Haverkamp, Nils Strodthoff
26 May 2023

XAI-based Comparison of Input Representations for Audio Event Classification
A. Frommholz, Fabian Seipel, Sebastian Lapuschkin, Wojciech Samek, Johanna Vielhaben
Topics: AAML, AI4TS
27 Apr 2023

Robustness of Visual Explanations to Common Data Augmentation
Lenka Tětková, Lars Kai Hansen
Topics: AAML
18 Apr 2023

Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models
Daniel G. Krakowczyk, Paul Prasse, D. R. Reich, Sebastian Lapuschkin, Tobias Scheffer, Lena A. Jäger
12 Apr 2023

Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks
Lorenz Linhardt, Klaus-Robert Müller, G. Montavon
Topics: AAML
12 Apr 2023

Multi-Channel Time-Series Person and Soft-Biometric Identification
Nilah Ravi Nair, Fernando Moya Rueda, Christopher Reining, Gernot A. Fink
04 Apr 2023
Better Understanding Differences in Attribution Methods via Systematic Evaluations
Sukrut Rao, Moritz D. Boehle, Bernt Schiele
Topics: XAI
21 Mar 2023

Explainable AI for Time Series via Virtual Inspection Layers
Johanna Vielhaben, Sebastian Lapuschkin, G. Montavon, Wojciech Samek
Topics: XAI, AI4TS
11 Mar 2023

Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon
Topics: FAtt
30 Dec 2022

Optimizing Explanations by Network Canonization and Hyperparameter Search
Frederik Pahde, Galip Umit Yolcu, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin
30 Nov 2022

Towards More Robust Interpretation via Local Gradient Alignment
Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon
Topics: FAtt
29 Nov 2022

Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations
Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
21 Nov 2022
Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models
Liu Zhendong, Wenyu Jiang, Yan Zhang, Chongjun Wang
Topics: CML
22 Jun 2022

From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation
Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, S. Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
Topics: FAtt
07 Jun 2022

Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI
Sami Ede, Serop Baghdadlian, Leander Weber, A. Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin
Topics: CLL
04 May 2022

Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement
Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek
15 Mar 2022

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
Topics: XAI, ELM
14 Feb 2022
Measurably Stronger Explanation Reliability via Model Canonization
Franz Motzkus, Leander Weber, Sebastian Lapuschkin
Topics: FAtt
14 Feb 2022

Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
07 Feb 2022

Toward Explainable AI for Regression Models
S. Letzgus, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Müller, G. Montavon
Topics: XAI
21 Dec 2021

ECQ$^{\text{x}}$: Explainability-Driven Quantization for Low-Bit and Sparse DNNs
Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin
Topics: MQ
09 Sep 2021

SpookyNet: Learning Force Fields with Electronic Degrees of Freedom and Nonlocal Effects
Oliver T. Unke, Stefan Chmiela, M. Gastegger, Kristof T. Schütt, H. E. Sauceda, K. Müller
01 May 2021
Choice modelling in the age of machine learning -- discussion paper
S. Cranenburgh, S. Wang, A. Vij, Francisco Câmara Pereira, J. Walker
28 Jan 2021

On the Explanation of Machine Learning Predictions in Clinical Gait Analysis
D. Slijepcevic, Fabian Horst, Sebastian Lapuschkin, Anna-Maria Raberger, Matthias Zeppelzauer, Wojciech Samek, C. Breiteneder, W. Schöllhorn, B. Horsak
16 Dec 2019

AudioMNIST: Exploring Explainable Artificial Intelligence for Audio Analysis on a Simple Benchmark
Sören Becker, Johanna Vielhaben, M. Ackermann, Klaus-Robert Müller, Sebastian Lapuschkin, Wojciech Samek
Topics: XAI
09 Jul 2018