TAET: Two-Stage Adversarial Equalization Training on Long-Tailed Distributions

2 March 2025
Wang YuHang
Junkang Guo
Aolei Liu
Kaihao Wang
Zaitong Wu
Zhenyu Liu
Wenfei Yin
Jian Liu
Abstract

Adversarial robustness is a critical challenge in deploying deep neural networks for real-world applications. While adversarial training is a widely recognized defense strategy, most existing studies focus on balanced datasets, overlooking the prevalence of long-tailed distributions in real-world data, which significantly complicates robustness. This paper provides a comprehensive analysis of adversarial training under long-tailed distributions and identifies limitations in the current state-of-the-art method, AT-BSL, in achieving robust performance under such conditions. To address these challenges, we propose a novel training framework, TAET, which integrates an initial stabilization phase followed by a stratified equalization adversarial training phase. Additionally, prior work on long-tailed robustness has largely ignored the crucial evaluation metric of balanced accuracy. To bridge this gap, we introduce the concept of balanced robustness, a comprehensive metric tailored for assessing robustness under long-tailed distributions. Extensive experiments demonstrate that our method surpasses existing advanced defenses, achieving significant improvements in both memory and computational efficiency. This work represents a substantial advancement in addressing robustness challenges in real-world applications. Our code is available at: this https URL.
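The abstract only sketches the two-stage design and the balanced-robustness metric. The PyTorch snippet below is a minimal, illustrative sketch of one plausible reading: a clean "stabilization" stage followed by class-reweighted adversarial training, with balanced robustness read as mean per-class accuracy on adversarial examples. The helper names (pgd_attack, balanced_robust_accuracy, two_stage_training) and all hyperparameters are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard L-infinity PGD; inputs assumed to lie in [0, 1].
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def balanced_robust_accuracy(model, loader, num_classes, device="cpu"):
    # One plausible reading of "balanced robustness": mean per-class
    # accuracy on adversarially perturbed inputs.
    model.eval()
    correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)          # the attack itself needs gradients
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        for c in range(num_classes):
            mask = y == c
            total[c] += mask.sum().item()
            correct[c] += (pred[mask] == c).sum().item()
    return (correct / total.clamp(min=1)).mean().item()

def two_stage_training(model, loader, class_counts, opt,
                       stage1_epochs=10, stage2_epochs=50, device="cpu"):
    # Rough skeleton of a two-stage scheme: stage 1 stabilizes the model on
    # clean data; stage 2 runs adversarial training with inverse-frequency
    # class weights as a stand-in for "stratified equalization".
    w = 1.0 / torch.tensor(class_counts, dtype=torch.float, device=device)
    w = w / w.sum() * len(class_counts)
    for _ in range(stage1_epochs):               # Stage 1: clean training
        model.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
    for _ in range(stage2_epochs):               # Stage 2: reweighted adversarial training
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            model.eval()                         # keep BN statistics fixed while attacking
            x_adv = pgd_attack(model, x, y)
            model.train()
            loss = F.cross_entropy(model(x_adv), y, weight=w)
            opt.zero_grad(); loss.backward(); opt.step()

Averaging per-class robust accuracy, rather than overall robust accuracy, keeps head classes from dominating the score, which is presumably the point of measuring robustness in a class-balanced way on long-tailed data.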

@article{yuhang2025_2503.01924,
  title={TAET: Two-Stage Adversarial Equalization Training on Long-Tailed Distributions},
  author={Wang YuHang and Junkang Guo and Aolei Liu and Kaihao Wang and Zaitong Wu and Zhenyu Liu and Wenfei Yin and Jian Liu},
  journal={arXiv preprint arXiv:2503.01924},
  year={2025}
}