
arXiv:2110.06850
Boosting the Certified Robustness of L-infinity Distance Nets

13 October 2021
Bohang Zhang
Du Jiang
Di He
Liwei Wang
Abstract

Recently, Zhang et al. (2021) developed a new neural network architecture based on ℓ∞-distance functions, which naturally possesses certified robustness by construction. Despite its excellent theoretical properties, the model has so far only achieved performance comparable to conventional networks. In this paper, we significantly boost the certified robustness of ℓ∞-distance nets through a careful analysis of the training process. In particular, we show that the ℓp-relaxation, a crucial technique for overcoming the non-smoothness of the model, leads to an unexpectedly large Lipschitz constant at the early training stage. This makes optimization with the hinge loss insufficient and produces sub-optimal solutions. Given these findings, we propose a simple approach that addresses these issues with a novel objective function combining a scaled cross-entropy loss with a clipped hinge loss. Our experiments show that with the proposed training strategy, the certified accuracy of the ℓ∞-distance net improves dramatically from 33.30% to 40.06% on CIFAR-10 (ε = 8/255), significantly outperforming other approaches in this area. This result clearly demonstrates the effectiveness and potential of ℓ∞-distance nets for certified robustness.
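The combined objective described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the hyperparameter names and values (`margin`, `clip`, `temp`, `lam`) are placeholders, and the precise scaling and clipping scheme used by the authors is given in the paper itself.

```python
import numpy as np

def combined_loss(logits, label, margin=0.3, clip=1.0, temp=8.0, lam=0.1):
    """Illustrative combined objective: a temperature-scaled cross-entropy
    term plus a clipped multi-class hinge term. All hyperparameters here
    are assumptions for the sketch, not the paper's settings."""
    z = np.asarray(logits, dtype=float)

    # Scaled cross-entropy: scale the logits before the softmax so the
    # loss still provides gradient signal when margins are small.
    s = temp * z
    s = s - s.max()                          # numerical stability
    log_probs = s - np.log(np.exp(s).sum())
    ce = -log_probs[label]

    # Clipped hinge: penalize margin violations against every wrong
    # class, but cap each per-class penalty at `clip` so a single large
    # violation cannot dominate the objective.
    violations = margin - (z[label] - np.delete(z, label))
    hinge = np.minimum(np.maximum(violations, 0.0), clip).sum()

    return lam * ce + hinge
```

A confidently correct prediction incurs a near-zero loss, while a misclassified example is penalized by both terms, which is the qualitative behavior the abstract's objective aims for.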
