
Downscaling Attacks: What You See is Not What You Get

Abstract

The resizing of images, which is typically a required preprocessing step for computer vision systems, is vulnerable to attack. We show that images can be crafted so that they look completely different at machine-vision scales than at the scales humans view them. The default settings of some common computer vision and machine learning systems are vulnerable, although defenses exist and are trivial to apply provided that defenders are aware of the threat. These attacks and defenses help to establish the role of input sanitization in machine learning.
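
To make the idea concrete, here is a minimal sketch (not from the paper; it uses a toy nearest-neighbour downscaler and hypothetical image sizes) of how such an image could be crafted: only the pixels that the downscaler will sample are overwritten with a hidden target image, so the file looks benign at full resolution but becomes the target after resizing.

```python
import numpy as np

def nearest_downscale(img, out_h, out_w):
    # Toy nearest-neighbour downscaler: keeps exactly one source pixel per output pixel.
    h, w = img.shape[:2]
    rows = (np.arange(out_h) * h) // out_h   # source rows that survive resizing
    cols = (np.arange(out_w) * w) // out_w   # source columns that survive resizing
    return img[rows[:, None], cols]

def craft_attack_image(benign, target):
    # Overwrite only the sampled pixel positions with the hidden target image.
    h, w = benign.shape[:2]
    th, tw = target.shape[:2]
    rows = (np.arange(th) * h) // th
    cols = (np.arange(tw) * w) // tw
    attack = benign.copy()
    attack[rows[:, None], cols] = target    # only a tiny fraction of pixels changes
    return attack

# Demo with synthetic "images" (hypothetical sizes):
benign = np.full((1024, 1024, 3), 200, dtype=np.uint8)              # uniform grey to the eye
target = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)     # what the model will see
attack = craft_attack_image(benign, target)

# After downscaling, the crafted image is exactly the hidden target...
assert np.array_equal(nearest_downscale(attack, 64, 64), target)
# ...yet roughly 0.4% of its full-resolution pixels differ from the benign image.
print(np.mean(attack != benign))
```

A defense along the lines the abstract hints at is equally simple in this setting: a downscaler that averages over every source pixel (e.g. area interpolation) rather than sampling a sparse subset would blend the planted pixels into their surroundings and destroy the hidden image.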
