arXiv:2010.13275
Asymptotic Behavior of Adversarial Training in Binary Classification

26 October 2020
Hossein Taheri
Ramtin Pedarsani
Christos Thrampoulidis
    AAML
Abstract

It has been consistently reported that many machine learning models are susceptible to adversarial attacks, i.e., small additive adversarial perturbations applied to data points can cause misclassification. Adversarial training using empirical risk minimization is considered the state-of-the-art method for defending against adversarial attacks. Despite its success in practice, several questions about the generalization performance of adversarial training remain open. In this paper, we derive precise theoretical predictions for the performance of adversarial training in binary classification. We consider the high-dimensional regime in which the data dimension grows with the size of the training set at a constant ratio. Our results provide exact asymptotics for the standard and adversarial test errors of estimators obtained by adversarial training with $\ell_q$-norm bounded perturbations ($q \ge 1$), for both discriminative binary models and generative Gaussian-mixture models with correlated features. Furthermore, we use these sharp predictions to uncover several intriguing observations on the role of various parameters, including the over-parameterization ratio, the data model, and the attack budget, on the adversarial and standard errors.
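To make the setting concrete, here is a minimal sketch (not the authors' code) of adversarial training for a linear binary classifier. It uses the standard fact that for linear models the inner maximization over $\ell_q$-bounded perturbations has a closed form: the adversarial margin is $y_i \langle \theta, x_i \rangle - \varepsilon \|\theta\|_p$, where $p$ is the dual exponent of $q$. The sketch fixes $q = 2$ (self-dual), logistic loss, plain gradient descent, and a synthetic Gaussian-mixture dataset; all hyperparameters below are illustrative assumptions.

```python
import numpy as np

def adv_logistic_loss(theta, X, y, eps):
    """Adversarial empirical logistic risk under l2-bounded perturbations
    (q = 2, whose dual norm is again l2). For a linear classifier the
    worst-case perturbation shifts each margin by -eps * ||theta||_2."""
    margins = y * (X @ theta) - eps * np.linalg.norm(theta)
    return np.mean(np.log1p(np.exp(-margins)))

def adv_train(X, y, eps, lr=0.1, steps=500):
    """Plain gradient descent on the adversarial empirical risk."""
    n, d = X.shape
    theta = np.zeros(d)
    theta[0] = 1e-3  # step off the non-differentiable point theta = 0
    for _ in range(steps):
        margins = y * (X @ theta) - eps * np.linalg.norm(theta)
        s = 1.0 / (1.0 + np.exp(margins))  # sigmoid(-margin_i)
        # d margin_i / d theta = y_i x_i - eps * theta / ||theta||_2
        grad = -(s[:, None] * (y[:, None] * X
                 - eps * theta / np.linalg.norm(theta))).mean(axis=0)
        theta -= lr * grad
    return theta

# Illustrative Gaussian-mixture data: x = y * mu + noise, y in {-1, +1}
rng = np.random.default_rng(0)
n, d, eps = 200, 20, 0.1
mu = np.ones(d) / np.sqrt(d)
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + 0.5 * rng.standard_normal((n, d))

theta0 = np.zeros(d)
theta0[0] = 1e-3
theta = adv_train(X, y, eps)
```

Because the inner maximization is available in closed form, no iterative attack (e.g., PGD) is needed during training; this is the same structural simplification that makes the linear setting amenable to the paper's exact asymptotic analysis.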
