
Adversarial Token Attacks on Vision Transformers

Abstract

Vision transformers rely on a patch-token-based self-attention mechanism, in contrast to convolutional networks. We investigate fundamental differences between these two families of models by designing a block-sparsity-based adversarial token attack. We probe and analyze transformer as well as convolutional models with token attacks of varying patch sizes. We infer that transformer models are more sensitive to token attacks than convolutional models, with ResNets outperforming transformer models by up to ∼30% in robust accuracy for single-token attacks.
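To make the attack setup concrete, below is a minimal PGD-style sketch in which the perturbation is confined to a single patch-aligned block of the input, mirroring a single-token attack. The function name, patch location, step size, iteration count, and the use of an L-infinity budget inside the block are illustrative assumptions, not the paper's exact block-sparse formulation.

```python
import torch
import torch.nn.functional as F

def single_token_attack(model, x, y, patch=16, row=0, col=0,
                        eps=8 / 255, alpha=2 / 255, steps=40):
    """PGD restricted to one patch-sized block (a hypothetical sketch;
    the paper's block-sparsity formulation may differ in detail)."""
    # Binary mask selecting a single patch-aligned block of the image.
    mask = torch.zeros_like(x)
    mask[..., row * patch:(row + 1) * patch,
              col * patch:(col + 1) * patch] = 1.0

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Only the masked block of delta affects the model input.
        loss = F.cross_entropy(model(x + delta * mask), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascent on the loss
            delta.clamp_(-eps, eps)              # L-inf budget in the block
            delta.grad.zero_()
    return (x + delta.detach() * mask).clamp(0.0, 1.0)
```

Sweeping `row` and `col` over all patch positions and keeping the most damaging block would approximate a worst-case single-token attack of the kind the abstract describes.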
