arXiv:2301.11324
Certified Interpretability Robustness for Class Activation Mapping
26 January 2023
Authors: Alex Gu, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
Tags: AAML
Papers citing
"Certified Interpretability Robustness for Class Activation Mapping"
3 / 3 papers shown
Title: Adversarial attacks and defenses in explainable artificial intelligence: A survey
Authors: Hubert Baniecki, P. Biecek
Tags: AAML
42 / 63 / 0
Date: 06 Jun 2023

Title: CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Authors: Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel
Tags: AAML
108 / 138 / 0
Date: 29 Nov 2018

Title: Adversarial examples in the physical world
Authors: Alexey Kurakin, Ian Goodfellow, Samy Bengio
Tags: SILM, AAML
284 / 5,835 / 0
Date: 08 Jul 2016