arXiv:0912.1128
How to Explain Individual Classification Decisions
D. Baehrens, T. Schroeter, Stefan Harmeling, M. Kawanabe, K. Hansen, K. Müller
6 December 2009 · FAtt
Papers citing "How to Explain Individual Classification Decisions" (10 of 10 papers shown)
1. Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers
   Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci · 10 Jan 2025

2. Going Beyond Conventional OOD Detection
   Sudarshan Regmi · OODD · 16 Nov 2024

3. Orbit: A Framework for Designing and Evaluating Multi-objective Rankers
   Chenyang Yang, Tesi Xiao, Michael Shavlovsky, Christian Kästner, Tongshuang Wu · 07 Nov 2024

4. Unlearning-based Neural Interpretations
   Ching Lam Choi, Alexandre Duplessis, Serge Belongie · FAtt · 10 Oct 2024

5. Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
   Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc · SSL · 01 Jul 2024

6. MambaLRP: Explaining Selective State Space Sequence Models
   F. Jafari, G. Montavon, Klaus-Robert Müller, Oliver Eberle · Mamba · 11 Jun 2024

7. Explaining Representation by Mutual Information
   Li Gu · SSL, FAtt · 28 Mar 2021

8. How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks
   Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft · UQCV, FAtt · 16 Jun 2020

9. Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
   Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller · XAI · 17 Mar 2020

10. "Why Should I Trust You?": Explaining the Predictions of Any Classifier
    Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 16 Feb 2016