DI-AA: An Interpretable White-box Attack for Fooling Deep Neural Networks
arXiv: 2110.07305
14 October 2021
Yixiang Wang, Jiqiang Liu, Xiaolin Chang, Jianhua Wang, Ricardo J. Rodríguez
Category: AAML
Papers citing "DI-AA: An Interpretable White-box Attack for Fooling Deep Neural Networks" (3 papers):

1. Iterative Adversarial Attack on Image-guided Story Ending Generation — Youze Wang, Wenbo Hu, Richang Hong — 16 May 2023
2. How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective — Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jinfeng Yi, Min-Fong Hong, Shiyu Chang, Sijia Liu — AAML — 27 Mar 2022
3. Adversarial examples in the physical world — Alexey Kurakin, Ian Goodfellow, Samy Bengio — SILM, AAML — 08 Jul 2016