Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks

Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
Abstract

We propose a novel gradient-based attack against transformer-based language models that searches for an adversarial example in a continuous space of token probabilities. Our algorithm narrows the gap between the adversarial loss computed on continuous and on discrete text representations by performing multi-step quantization in a quantization-compensation loop. Experiments show that our method significantly outperforms other approaches on various natural language processing (NLP) tasks.
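The abstract does not spell out the quantization-compensation loop, but one plausible reading is an error-feedback scheme: at each step, the continuous token distribution plus the accumulated quantization error is snapped to a one-hot (discrete) token, and the resulting error is fed back into the next step. The sketch below illustrates that idea only; the function name, step count, and the exact compensation rule are assumptions, not the authors' published algorithm.

```python
import numpy as np

def multistep_quantize(probs: np.ndarray, steps: int = 3) -> np.ndarray:
    """Snap a continuous token distribution to a one-hot vector over
    several steps, feeding the quantization error back in each time
    (a hypothetical quantization-compensation loop).

    probs: array of shape (..., vocab_size) with token probabilities.
    Returns a one-hot array of the same shape.
    """
    residual = np.zeros_like(probs)  # accumulated quantization error
    quantized = np.zeros_like(probs)
    for _ in range(steps):
        # compensate the continuous distribution with past error
        compensated = probs + residual
        # hard quantization: pick the most likely token
        idx = np.argmax(compensated, axis=-1)
        quantized = np.eye(probs.shape[-1])[idx]
        # record the error this discretization introduced
        residual = compensated - quantized
    return quantized
```

In an actual attack, the one-hot output would be re-embedded and evaluated against the model's loss between steps, so the compensation steers the search toward discrete tokens whose adversarial loss matches the continuous relaxation.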
