Evaluating saliency methods on artificial data with different background types
arXiv:2112.04882 · 9 December 2021
Céline Budding, Fabian Eitel, K. Ritter, Stefan Haufe
Tags: XAI, FAtt, MedIm
Papers citing "Evaluating saliency methods on artificial data with different background types" (4 papers):
Explainable AI needs formal notions of explanation correctness
Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki
XAI · 28 · 0 · 0 · 22 Sep 2024
Decoupling Pixel Flipping and Occlusion Strategy for Consistent XAI Benchmarks
Stefan Blücher, Johanna Vielhaben, Nils Strodthoff
AAML · 61 · 20 · 0 · 12 Jan 2024
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
XAI, ELM · 16 · 168 · 0 · 14 Feb 2022
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML · 242 · 3,681 · 0 · 28 Feb 2017