Fast Adversarial Training against Sparse Attacks Requires Loss Smoothing

28 February 2025
Xuyang Zhong
Yixiao Huang
Chen Liu
    AAML
Abstract

This paper studies fast adversarial training against sparse adversarial perturbations bounded by the $\ell_0$ norm. We demonstrate the challenges of employing 1-step attacks on $\ell_0$-bounded perturbations for fast adversarial training, including degraded performance and the occurrence of catastrophic overfitting (CO). We highlight that CO in $\ell_0$ adversarial training is caused by the sub-optimal perturbation locations found by 1-step attacks. Theoretical and empirical analyses reveal that the loss landscape of $\ell_0$ adversarial training is more craggy than that of its $\ell_\infty$, $\ell_2$, and $\ell_1$ counterparts, and we corroborate that this craggy loss landscape aggravates CO. To address these issues, we propose Fast-LS-$\ell_0$, which incorporates soft labels and a trade-off loss function to smooth the adversarial loss landscape. Extensive experiments demonstrate that our method overcomes catastrophic overfitting, achieves state-of-the-art performance, and narrows the performance gap between 1-step and multi-step adversarial training against sparse attacks.
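The abstract names three ingredients: a 1-step sparse attack, soft labels, and a trade-off loss. The following minimal PyTorch sketch illustrates how these pieces could fit together in a training objective; the greedy top-k coordinate selection, the TRADES-style KL trade-off term, and every function name and hyperparameter (k, smoothing, beta) are assumptions for illustration only, not the authors' Fast-LS-$\ell_0$ implementation.

# Illustrative sketch only: helper names, the top-k coordinate selection,
# the TRADES-style KL trade-off term, and all hyperparameters below are
# assumptions for exposition, not the authors' Fast-LS-l0 code.
import torch
import torch.nn.functional as F

def soft_labels(targets, num_classes, smoothing=0.1):
    # Replace hard one-hot labels with smoothed distributions.
    soft = torch.full((targets.size(0), num_classes),
                      smoothing / (num_classes - 1), device=targets.device)
    soft.scatter_(1, targets.unsqueeze(1), 1.0 - smoothing)
    return soft

def one_step_l0_attack(model, x, y, k=20):
    # 1-step sparse attack: push the k coordinates with the largest
    # loss-gradient magnitude to their loss-increasing extreme in [0, 1],
    # i.e. a perturbation with l0 norm at most k.
    x = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
    flat_grad = grad.flatten(1)
    _, idx = flat_grad.abs().topk(k, dim=1)
    x_adv = x.detach().flatten(1).clone()
    x_adv.scatter_(1, idx, (flat_grad.gather(1, idx) > 0).float())
    return x_adv.view_as(x)

def smoothed_tradeoff_loss(model, x, y, num_classes,
                           smoothing=0.1, beta=6.0, k=20):
    # Trade-off between clean cross-entropy on soft labels and a KL
    # consistency term on the 1-step adversarial example.
    x_adv = one_step_l0_attack(model, x, y, k=k)
    logits_clean, logits_adv = model(x), model(x_adv)
    soft = soft_labels(y, num_classes, smoothing)
    clean_loss = -(soft * F.log_softmax(logits_clean, dim=1)).sum(1).mean()
    robust_loss = F.kl_div(F.log_softmax(logits_adv, dim=1),
                           F.softmax(logits_clean, dim=1).detach(),
                           reduction="batchmean")
    return clean_loss + beta * robust_loss

The intent of such an objective, per the abstract, is that softening both the labels and the clean/adversarial trade-off smooths the craggy $\ell_0$ loss landscape that the paper identifies as a driver of catastrophic overfitting in 1-step training.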

@article{zhong2025_2502.21041,
  title={Fast Adversarial Training against Sparse Attacks Requires Loss Smoothing},
  author={Xuyang Zhong and Yixiao Huang and Chen Liu},
  journal={arXiv preprint arXiv:2502.21041},
  year={2025}
}