Boosting the Certified Robustness of L-infinity Distance Nets

Recently, Zhang et al. (2021) developed a new neural network architecture based on ℓ∞-distance functions, which naturally possesses certified robustness by its construction. Despite excellent theoretical properties, the model has so far only achieved performance comparable to conventional networks. In this paper, we significantly boost the certified robustness of ℓ∞-distance nets through a careful analysis of the training process. In particular, we show that the ℓp-relaxation, a crucial way to overcome the non-smoothness of the model, leads to an unexpectedly large Lipschitz constant at the early training stage. This makes optimization with hinge loss insufficient and produces sub-optimal solutions. Given these findings, we propose a simple approach to address the above issue by designing a novel objective function that combines a scaled cross-entropy loss with a clipped hinge loss. Experiments show that using the proposed training strategy, the certified accuracy of the ℓ∞-distance net can be dramatically improved from 33.30% to 40.06% on CIFAR-10 (ε = 8/255), significantly outperforming other approaches in this area. Our results clearly demonstrate the effectiveness and potential of ℓ∞-distance nets for certified robustness. Code is available at https://github.com/zbh2047/L_inf-dist-net-v2.
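The combined objective described above can be illustrated with a minimal NumPy sketch. The function names, the temperature `s`, the margin, the clipping threshold, and the mixing weight `lam` are all hypothetical placeholders for exposition; the paper's exact formulation and hyperparameters may differ.

```python
import numpy as np

def scaled_ce(logits, y, s):
    # Cross-entropy computed on temperature-scaled logits s * logits
    # (scaling factor s is an illustrative hyperparameter).
    z = s * logits
    z = z - z.max(axis=1, keepdims=True)          # stabilize the softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

def clipped_hinge(logits, y, margin, clip):
    # Hinge loss on the margin between the true-class logit and the largest
    # other logit, clipped at `clip` so that badly-classified examples early
    # in training do not dominate the gradient (hypothetical form).
    true = logits[np.arange(len(y)), y]
    others = logits.copy()
    others[np.arange(len(y)), y] = -np.inf
    worst = others.max(axis=1)
    return np.minimum(np.maximum(margin - (true - worst), 0.0), clip).mean()

def combined_loss(logits, y, s=1.0, margin=0.5, clip=1.0, lam=0.5):
    # Convex combination of the two terms; all values here are illustrative.
    return lam * scaled_ce(logits, y, s) + (1 - lam) * clipped_hinge(logits, y, margin, clip)
```

For well-separated predictions the clipped hinge term vanishes (the margin is already met) and only the smooth cross-entropy term drives training, which is consistent with the motivation of keeping gradients informative when the hinge saturates.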