How stealthy is stealthy? Studying the Efficacy of Black-Box Adversarial Attacks in the Real World

Main: 12 pages
9 figures
Bibliography: 3 pages
5 tables
Abstract

Deep learning systems deployed in critical domains such as autonomous vehicles are vulnerable to adversarial examples: inputs crafted to mislead classifiers. This study investigates black-box adversarial attacks in computer vision, a realistic scenario in which the attacker has query-only access to the target model. We introduce three properties to evaluate the real-world feasibility of an attack: robustness to compression, stealthiness to automatic detection, and stealthiness to human inspection. State-of-the-art methods tend to prioritize one criterion at the expense of the others. We propose ECLIPSE, a novel attack that combines Gaussian blurring of sampled gradients with a local surrogate model. Comprehensive experiments on a public dataset demonstrate ECLIPSE's advantages and its improved trade-off among the three properties.
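The abstract names two building blocks: gradients estimated from queries alone, and Gaussian blurring of those sampled gradients. The sketch below is purely illustrative and is not the paper's actual ECLIPSE implementation; the function names, the NES-style finite-difference estimator, and all parameter values are assumptions chosen to make the idea concrete.

```python
import numpy as np

def estimate_gradient(loss_fn, x, n_samples=100, sigma=0.1, rng=None):
    """Zeroth-order gradient estimate using only queries to loss_fn
    (NES-style antithetic sampling). Hypothetical sketch, not ECLIPSE."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        # Central finite difference along a random Gaussian direction.
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) * u
    return grad / (2 * sigma * n_samples)

def gaussian_blur_1d(g, kernel_sigma=1.0, radius=2):
    """Smooth a sampled gradient with a normalized Gaussian kernel;
    kernel size and sigma are illustrative, not the paper's values."""
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * kernel_sigma**2))
    kernel /= kernel.sum()
    return np.convolve(g, kernel, mode="same")
```

For a simple quadratic loss the estimator recovers a descent direction from queries alone, and the blur suppresses high-frequency noise in the estimate, which is one plausible way smoother (and hence less conspicuous) perturbations could arise.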

@article{panebianco2025_2506.05382,
  title={How Stealthy is Stealthy? Studying the Efficacy of Black-Box Adversarial Attacks in the Real World},
  author={Francesco Panebianco and Mario D'Onghia and Stefano Zanero and Michele Carminati},
  journal={arXiv preprint arXiv:2506.05382},
  year={2025}
}