
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks

David Stutz
Matthias Hein
Bernt Schiele
Abstract

Adversarial training yields robust models against a specific threat model, e.g., $L_\infty$ adversarial examples. Typically, robustness does not generalize to previously unseen threat models, e.g., other $L_p$ norms or larger perturbations. Our confidence-calibrated adversarial training (CCAT) tackles this problem by biasing the model towards low-confidence predictions on adversarial examples. By allowing examples with low confidence to be rejected, robustness generalizes beyond the threat model employed during training. CCAT, trained only on $L_\infty$ adversarial examples, increases robustness against larger $L_\infty$, $L_2$, $L_1$ and $L_0$ attacks, adversarial frames, distal adversarial examples and corrupted examples, and yields better clean accuracy compared to adversarial training. For thorough evaluation, we developed novel white- and black-box attacks that directly attack CCAT by maximizing confidence. For each threat model, we use 7 attacks with up to 50 restarts and 5000 iterations and report the worst-case robust test error, extended to our confidence-thresholded setting, across all attacks.
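As a rough illustration of the confidence-thresholded setting mentioned above, the following sketch shows how predictions whose maximum softmax confidence falls below a threshold can be rejected at test time. It assumes a PyTorch classifier (model) returning logits and an illustrative threshold (tau); neither is taken from the paper, and this is not the authors' reference implementation.

import torch

def predict_with_rejection(model, x, tau=0.9):
    # Return predicted labels, with -1 marking rejected (low-confidence) inputs.
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)       # class probabilities
        confidences, predictions = probs.max(dim=1)  # max-softmax confidence and argmax label
        predictions[confidences < tau] = -1          # reject predictions below the threshold
    return predictions

Under this setup, robust test error would only count adversarial examples that are both misclassified and accepted (i.e., not rejected by the threshold), which is the intuition behind the confidence-thresholded evaluation described in the abstract.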
