Towards Universal Certified Robustness with Multi-Norm Training

Existing certified training methods can only train models to be robust against a specific perturbation type (e.g., $\ell_\infty$ or $\ell_2$). However, an $\ell_\infty$ certifiably robust model may not be certifiably robust against $\ell_2$ perturbations (and vice versa), and such models also have low robustness against other perturbations (e.g., geometric and patch transformations). By constructing a theoretical framework to analyze and mitigate this tradeoff, we propose the first multi-norm certified training framework, \textbf{CURE}, consisting of several multi-norm certified training methods, to attain better \emph{union robustness} when training from scratch or fine-tuning a pre-trained certified model. Inspired by our theoretical findings, we devise bound alignment and connect natural training with certified training for better union robustness. Compared with SOTA certified training, \textbf{CURE} improves union robustness to on MNIST, on CIFAR-10, and on TinyImagenet across different epsilon values. It also leads to better generalization on a diverse set of challenging unseen geometric and patch perturbations, to and on CIFAR-10. Overall, our contributions pave a path towards \textit{universal certified robustness}.
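For reference, \emph{union robustness} is stated below in its standard form from the multi-norm robustness literature; this formalization and the illustrative radii $\epsilon_\infty$ and $\epsilon_2$ are our gloss, not notation taken from the paper. A classifier $f$ is union-robust at a labeled input $(x, y)$ if it predicts $y$ under every perturbation in the union of the two norm balls:
\[
\arg\max_i f_i(x+\delta) = y \quad \forall\, \delta \in B_\infty(\epsilon_\infty) \cup B_2(\epsilon_2), \qquad B_p(\epsilon) := \{\delta : \|\delta\|_p \le \epsilon\},
\]
and \emph{certified} union robustness requires this property to be provable (e.g., via bound propagation) rather than merely measured against known attacks.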