  3. 2003.06468
GeoDA: a geometric framework for black-box adversarial attacks

13 March 2020
A. Rahmati
Seyed-Mohsen Moosavi-Dezfooli
P. Frossard
H. Dai
Abstract

Adversarial examples are carefully perturbed images that fool image classifiers. We propose a geometric framework for generating adversarial examples in one of the most challenging black-box settings, where the adversary can issue only a small number of queries, each returning the top-1 label of the classifier. Our framework is based on the observation that the decision boundary of deep networks usually has a small mean curvature in the vicinity of data samples. We propose an effective iterative algorithm to generate query-efficient black-box perturbations with small ℓ_p norms for p ≥ 1, which is confirmed via experimental evaluations on state-of-the-art natural image classifiers. Moreover, for p = 2, we theoretically show that our algorithm converges to the minimal ℓ_2 perturbation when the curvature of the decision boundary is bounded. We also obtain the optimal distribution of the queries over the iterations of the algorithm. Finally, experimental results confirm that our principled black-box attack algorithm outperforms state-of-the-art algorithms, generating smaller perturbations with fewer queries.
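The core idea, estimating the normal of a locally flat decision boundary from top-1 label queries and then stepping along it, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the linear toy classifier, the query budget, the probe scale, and the label-only binary search are all assumptions made here for demonstration, and a linear boundary stands in for the low-curvature boundary the paper assumes.

```python
import numpy as np

# Toy black-box classifier: a linear boundary stands in for the
# low-curvature decision boundary assumed in the paper.
rng = np.random.default_rng(0)
w_true = np.array([3.0, -1.0])  # hidden weights, unknown to the attacker


def top1_label(x):
    """Black-box oracle: returns only the top-1 label, per the threat model."""
    return int(x @ w_true > 0)


def estimate_normal(x, n_queries=2000, sigma=2.0):
    """Estimate the boundary normal near x from query labels alone.

    Each random probe x + sigma*u is queried; probes landing on the
    adversarial side vote +u, the rest vote -u. If the boundary is
    locally flat, the signed average aligns with its normal.
    """
    base = top1_label(x)
    n_hat = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.standard_normal(x.shape)
        if top1_label(x + sigma * u) != base:
            n_hat += u
        else:
            n_hat -= u
    return n_hat / np.linalg.norm(n_hat)


def l2_attack_step(x, overshoot=1.05):
    """One linearized step: move x along the estimated normal just past
    the boundary, which is the minimal l2 perturbation when the
    boundary is exactly flat."""
    n_hat = estimate_normal(x)
    base = top1_label(x)
    # Label-only binary search for the step size that crosses the boundary.
    lo, hi = 0.0, 10.0 * np.linalg.norm(x)
    while top1_label(x + hi * n_hat) == base:
        hi *= 2.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if top1_label(x + mid * n_hat) == base:
            lo = mid
        else:
            hi = mid
    return x + overshoot * hi * n_hat


x0 = np.array([2.0, 1.0])                 # clean input, label 1
x_adv = l2_attack_step(x0)
print(top1_label(x0), top1_label(x_adv))  # prints: 1 0
```

The sketch omits what makes GeoDA itself efficient, notably the optimal allocation of the query budget across iterations derived in the paper; it only illustrates why a small number of label queries suffices when the boundary is nearly flat.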
