
Towards Universal Certified Robustness with Multi-Norm Training

Abstract

Existing certified training methods can only train models to be robust against a single perturbation type (e.g., $\ell_\infty$ or $\ell_2$). However, an $\ell_\infty$ certifiably robust model may not be certifiably robust against $\ell_2$ perturbations (and vice versa), and it also exhibits low robustness against other perturbations (e.g., geometric and patch transformations). By constructing a theoretical framework to analyze and mitigate this tradeoff, we propose the first multi-norm certified training framework, \textbf{CURE}, consisting of several multi-norm certified training methods, to attain better \emph{union robustness} both when training from scratch and when fine-tuning a pre-trained certified model. Inspired by our theoretical findings, we devise bound alignment and connect natural training with certified training for better union robustness. Compared with SOTA certified training, \textbf{CURE} improves union robustness to $32.0\%$ on MNIST, $25.8\%$ on CIFAR-10, and $10.6\%$ on TinyImagenet across different epsilon values. It also generalizes better to a diverse set of challenging unseen geometric and patch perturbations, improving robustness to $6.8\%$ and $16.0\%$ respectively on CIFAR-10. Overall, our contributions pave a path towards \textit{universal certified robustness}.
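To make the multi-norm objective concrete, below is a minimal sketch of one way to combine per-norm certified losses. This is an illustration under simplifying assumptions, not the paper's CURE method: it uses a linear classifier, where the worst-case logit margin under an $\ell_p$ ball of radius $\epsilon$ has a closed form via the dual norm ($\ell_1$ for $\ell_\infty$, $\ell_2$ for $\ell_2$); the function names, the convex-mixing weight `alpha`, and the epsilon values are all hypothetical choices for the example.

```python
# Hypothetical sketch: multi-norm certified loss for a linear classifier.
# Not the paper's implementation; CURE applies to deep networks via
# certified bound propagation and bound alignment.
import torch
import torch.nn.functional as F

def certified_loss(W, b, x, y, eps, dual_p):
    """Cross-entropy on worst-case logits of the linear model W x + b
    under an l_p perturbation ball, computed with the dual norm dual_p."""
    logits = x @ W.t() + b                        # (batch, classes)
    # Weight differences w_j - w_y for every class j (rows for j == y are 0).
    w_diff = W.unsqueeze(0) - W[y].unsqueeze(1)   # (batch, classes, dim)
    # Worst-case increase of logit j relative to the true class y.
    margin = eps * w_diff.norm(p=dual_p, dim=-1)  # (batch, classes)
    return F.cross_entropy(logits + margin, y)

def multi_norm_loss(W, b, x, y, eps_inf, eps_2, alpha=0.5):
    """One plausible combination: a convex mix of the per-norm certified
    losses; taking the max over norms is another common choice."""
    loss_inf = certified_loss(W, b, x, y, eps_inf, dual_p=1)  # dual of l_inf
    loss_2 = certified_loss(W, b, x, y, eps_2, dual_p=2)      # dual of l_2
    return alpha * loss_inf + (1 - alpha) * loss_2

# Toy usage: 10-class linear model on 784-dim inputs.
torch.manual_seed(0)
W = torch.randn(10, 784, requires_grad=True)
b = torch.zeros(10, requires_grad=True)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = multi_norm_loss(W, b, x, y, eps_inf=0.1, eps_2=0.5)
loss.backward()
```

A model trained this way is pushed to keep its worst-case margins small under both balls simultaneously, which is the union-robustness goal the abstract describes; the tradeoff the paper analyzes arises because the two margin terms pull the weights in different directions.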
