Robust Models Are More Interpretable Because Attributions Look Normal
Zifan Wang, Matt Fredrikson, Anupam Datta
arXiv:2103.11257, 20 March 2021
Topics: OOD, FAtt

Papers citing "Robust Models Are More Interpretable Because Attributions Look Normal" (5 of 5 papers shown):

SAIF: Sparse Adversarial and Imperceptible Attack Framework
Tooba Imtiaz, Morgan Kohler, Jared Miller, Zifeng Wang, M. Sznaier, Octavia Camps, Jennifer Dy
Topics: AAML
14 Dec 2022

On the Relationship Between Explanation and Prediction: A Causal View
Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
Topics: FAtt, CML
13 Dec 2022

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
Topics: XAI, FAtt
10 Nov 2022

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
Topics: XAI, FAtt, AAML
09 Nov 2022

Globally-Robust Neural Networks
Klas Leino, Zifan Wang, Matt Fredrikson
Topics: AAML, OOD
16 Feb 2021