Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks

12 May 2022 · arXiv:2205.06127
Pascale Gourdeau
Varun Kanade
Marta Z. Kwiatkowska
J. Worrell
Abstract

A fundamental problem in adversarial machine learning is to quantify how much training data is needed in the presence of evasion attacks. In this paper we address this issue within the framework of PAC learning, focusing on the class of decision lists. Given that distributional assumptions are essential in the adversarial setting, we work with probability distributions on the input data that satisfy a Lipschitz condition: nearby points have similar probability. Our key results illustrate that the adversary's budget (that is, the number of bits it can perturb on each input) is a fundamental quantity in determining the sample complexity of robust learning. Our first main result is a sample-complexity lower bound: the class of monotone conjunctions (essentially the simplest non-trivial hypothesis class on the Boolean hypercube), and hence any superclass of it, has sample complexity at least exponential in the adversary's budget. Our second main result is a corresponding upper bound: for every fixed k, the class of k-decision lists has polynomial sample complexity against a log(n)-bounded adversary. This sheds further light on the question of whether an efficient PAC learning algorithm can always be used as an efficient log(n)-robust learning algorithm under the uniform distribution.
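
To make the adversarial model concrete, the quantities in the abstract can be written out schematically in LaTeX. This is a sketch rather than the paper's verbatim definitions: it assumes the "exact in the ball" notion of robust risk commonly used in this line of work on the Boolean hypercube, and it suppresses the Lipschitz condition on the distribution and all constants, which the actual theorems carry.

% Schematic rendering (assumed, not verbatim from the paper).
% An adversary with budget \rho may flip at most \rho bits of the input x,
% i.e. it can move x anywhere inside the Hamming ball of radius \rho:
\[
  B_\rho(x) \;=\; \{\, z \in \{0,1\}^n : d_H(x, z) \le \rho \,\}.
\]
% Robust risk of a hypothesis h against the target concept c under distribution D
% ("exact in the ball": h must agree with c on every point the adversary can reach):
\[
  \mathrm{R}_\rho(h, c, D) \;=\; \Pr_{x \sim D}\bigl[\, \exists\, z \in B_\rho(x) : h(z) \neq c(z) \,\bigr].
\]
% Shape of the two headline results (distributional conditions and constants omitted):
% (i) lower bound -- monotone conjunctions, and any superclass of them, require
\[
  m \;=\; \exp\bigl(\Omega(\rho)\bigr)
\]
% samples against a \rho-bounded adversary;
% (ii) upper bound -- for every fixed k, k-decision lists are robustly learnable with
\[
  m \;=\; \mathrm{poly}(n, 1/\epsilon, 1/\delta)
\]
% samples against an O(\log n)-bounded adversary.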

View on arXiv