
Boosting the Certified Robustness of L-infinity Distance Nets

Abstract

Recently, Zhang et al. (2021) developed a new neural network architecture based on $\ell_\infty$-distance functions, which naturally possesses certified $\ell_\infty$ robustness by construction. Despite rigorous theoretical guarantees, the model has so far only achieved performance comparable to that of conventional networks. In this paper, we make the following two contributions: (i) we demonstrate that $\ell_\infty$-distance nets enjoy a fundamental advantage in certified robustness over conventional networks (under typical certification approaches); (ii) with an improved training process, we significantly boost the certified accuracy of $\ell_\infty$-distance nets. Our training approach largely alleviates the optimization difficulty that arises in the previous training scheme, in particular the unexpectedly large Lipschitz constant caused by a crucial trick called $\ell_p$-relaxation. The core of our approach is a novel objective function that combines a scaled cross-entropy loss and a clipped hinge loss with a decaying mixing coefficient. Experiments show that with the proposed training strategy, the certified accuracy of the $\ell_\infty$-distance net can be dramatically improved from 33.30% to 40.06% on CIFAR-10 ($\epsilon=8/255$), outperforming other approaches in this area by a large margin. Our results clearly demonstrate the effectiveness and potential of the $\ell_\infty$-distance net for certified robustness. Code is available at https://github.com/zbh2047/L_inf-dist-net-v2.
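To make the described objective concrete, below is a minimal PyTorch sketch of a loss that mixes a scaled cross-entropy term with a clipped multi-class hinge term under a decaying coefficient, in the spirit of the abstract. The function name `mixed_loss`, the hyperparameters `scale` and `margin`, and the linear decay schedule are illustrative assumptions, not the paper's exact formulation (see the repository linked above for the authors' implementation).

```python
import torch
import torch.nn.functional as F

def mixed_loss(logits: torch.Tensor, labels: torch.Tensor,
               lam: float, scale: float = 5.0, margin: float = 0.3) -> torch.Tensor:
    """Sketch: lam * scaled cross-entropy + (1 - lam) * clipped hinge.

    `scale`, `margin`, and the decay schedule below are placeholder
    values for illustration, not those used in the paper.
    """
    # Scaled cross-entropy: temperature-scale the logits before softmax.
    ce = F.cross_entropy(logits * scale, labels)

    # Multi-class hinge on the gap between the true-class logit and the
    # largest competing logit, clipped at `margin` so confidently correct
    # examples stop contributing gradient.
    true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    masked = logits.masked_fill(
        F.one_hot(labels, logits.size(1)).bool(), float('-inf'))
    runner_up = masked.max(dim=1).values
    hinge = torch.clamp(margin - (true_logit - runner_up),
                        min=0.0, max=margin).mean()

    return lam * ce + (1.0 - lam) * hinge

# Decaying mixing coefficient, e.g. linear from 1 to 0 over training:
# lam = max(0.0, 1.0 - epoch / num_epochs)
```

Under this kind of schedule, training starts dominated by the cross-entropy term (easier to optimize) and gradually shifts weight to the hinge term, whose clipping caps the per-example penalty once the logit margin is large enough.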
