Recent work has shown how easily white-box adversarial attacks can be applied to state-of-the-art image classifiers. However, real-life scenarios more closely resemble black-box adversarial conditions, which lack transparency and usually impose natural, hard constraints on the query budget. We propose EvoBA, a black-box adversarial attack based on a surprisingly simple evolutionary search strategy. EvoBA is query-efficient, minimizes L0 adversarial perturbations, and does not require any form of training. EvoBA shows efficiency and efficacy through results that are in line with those of much more complex state-of-the-art black-box attacks such as AutoZOOM. It is more query-efficient than SimBA, a simple and powerful baseline black-box attack, and has a similar level of complexity. Therefore, we propose it both as a new strong baseline for black-box adversarial attacks and as a fast, general tool for gaining empirical insight into how robust image classifiers are with respect to L0 adversarial perturbations. There exist fast and reliable L2 black-box attacks, such as SimBA, and L∞ black-box attacks, such as DeepSearch. We propose EvoBA as a query-efficient L0 black-box adversarial attack which, together with the aforementioned methods, can serve as a generic tool for assessing the empirical robustness of image classifiers. The main advantages of such methods are that they run fast, are query-efficient, and can easily be integrated into image classifier development pipelines. While our attack minimizes the L0 adversarial perturbation, we also report L2 distances, and we note that we compare favorably to both the state-of-the-art black-box attack, AutoZOOM, and the strong baseline, SimBA.
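
To make the idea concrete, below is a minimal sketch of the kind of simple evolutionary search the abstract describes: a (1 + λ)-style loop that mutates a few pixels at a time and keeps a candidate only if it lowers the classifier's confidence in the true class. This is an illustration under our own assumptions, not the paper's reference implementation; the names `evo_attack` and `predict_proba`, and all hyperparameter values, are hypothetical.

```python
# Minimal sketch of a (1 + lambda) evolutionary black-box attack in the spirit
# of the abstract. NOT the authors' reference implementation: `evo_attack`,
# `predict_proba`, and all hyperparameter values are illustrative assumptions.
import numpy as np

def evo_attack(x, true_label, predict_proba,
               generations=1000, offspring=20, pixels_per_step=1, seed=None):
    """Sparse (L0-style) black-box attack via simple evolutionary search.

    x             : input image, float array of shape (H, W, C), values in [0, 1]
    true_label    : index of the correct class
    predict_proba : black-box callable mapping one image to class probabilities
    """
    rng = np.random.default_rng(seed)
    h, w, c = x.shape
    parent = x.copy()
    parent_fit = predict_proba(parent)[true_label]  # fitness: true-class prob (lower is better)

    for _ in range(generations):
        # Generate offspring by mutating a few random pixels of the parent.
        children = []
        for _ in range(offspring):
            child = parent.copy()
            for _ in range(pixels_per_step):
                i, j = rng.integers(h), rng.integers(w)
                child[i, j] = rng.random(c)  # overwrite one pixel with random values
            children.append(child)

        # One model query per child; keep the best child only if it beats the parent.
        scored = [(predict_proba(ch), ch) for ch in children]
        best_probs, best_child = min(scored, key=lambda s: s[0][true_label])
        if best_probs[true_label] < parent_fit:
            parent, parent_fit = best_child, best_probs[true_label]
            if np.argmax(best_probs) != true_label:
                return parent  # misclassified: attack succeeded
    return None  # query budget exhausted without success
```

Note that the loop only ever reads the model's output probabilities, never its gradients or weights, which is what makes this style of attack black-box, query-countable, and easy to drop into a model-evaluation pipeline.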