Defensive Distillation is Not Robust to Adversarial Examples

Abstract

We show that defensive distillation is not secure: it is no more resistant to targeted misclassification attacks than unprotected neural networks.