Textual adversarial examples pose serious threats to the reliability of natural language processing systems. Recent studies suggest that adversarial examples tend to deviate from the underlying manifold of normal texts, whereas pre-trained masked language models can approximate the manifold of normal data. These findings motivate the use of masked language models for detecting textual adversarial attacks. We first introduce Masked Language Model-based Detection (MLMD), which leverages the mask and unmask operations of the masked language modeling (MLM) objective to induce the difference in manifold changes between normal and adversarial texts. Although MLMD achieves competitive detection performance, its exhaustive one-by-one masking strategy introduces significant computational overhead. Our post-hoc analysis reveals that a substantial fraction of non-keywords in the input contribute little to detection yet consume resources. Building on this observation, we introduce Gradient-guided MLMD (GradMLMD), which leverages gradient information to identify and skip non-keywords during detection, substantially reducing resource consumption without compromising detection performance.
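To make the two steps concrete, the following is a minimal sketch of a gradient-guided mask-and-unmask detector, assuming Hugging Face transformers, bert-base-uncased as the masked language model, and an off-the-shelf SST-2 victim classifier (textattack/bert-base-uncased-SST-2); the label-flip-rate score, the keyword budget k, and the function names are illustrative assumptions rather than the paper's exact configuration.

import torch
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          AutoModelForSequenceClassification)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()
# Victim classifier; any sequence classifier sharing the tokenizer would do (assumption).
clf = AutoModelForSequenceClassification.from_pretrained(
    "textattack/bert-base-uncased-SST-2").eval()


def keyword_positions(text, k=5):
    """Rank token positions by gradient magnitude and keep the top-k (the gradient-guided step)."""
    enc = tok(text, return_tensors="pt", truncation=True)
    embeds = clf.get_input_embeddings()(enc["input_ids"]).detach()
    embeds.requires_grad_(True)
    logits = clf(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
    pred = logits.argmax(-1)
    torch.nn.functional.cross_entropy(logits, pred).backward()
    saliency = embeds.grad.norm(dim=-1).squeeze(0)   # per-token gradient norm
    saliency[0] = saliency[-1] = 0.0                 # ignore [CLS] / [SEP]
    return enc, pred.item(), saliency.topk(min(k, saliency.numel())).indices.tolist()


def flip_rate(text, k=5):
    """Mask each keyword, refill it with the MLM, and measure how often the
    victim classifier's label flips; a high rate suggests an adversarial input."""
    enc, orig_label, positions = keyword_positions(text, k)
    flips = 0
    for pos in positions:
        masked = enc["input_ids"].clone()
        masked[0, pos] = tok.mask_token_id
        with torch.no_grad():
            fill = mlm(input_ids=masked,
                       attention_mask=enc["attention_mask"]).logits[0, pos].argmax()
            masked[0, pos] = fill                    # unmask with the MLM's top token
            new_label = clf(input_ids=masked,
                            attention_mask=enc["attention_mask"]).logits.argmax(-1).item()
        flips += int(new_label != orig_label)
    return flips / max(len(positions), 1)            # compare against a detection threshold

In this sketch, ranking tokens by gradient norm and masking only the top-k mirrors GradMLMD's skipping of non-keywords, while the flip rate stands in for the manifold-change signal that MLMD derives from the mask-and-unmask operations.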
@article{zhang2025_2504.08798,
  title={Exploring Gradient-Guided Masked Language Model to Detect Textual Adversarial Attacks},
  author={Xiaomei Zhang and Zhaoxi Zhang and Yanjun Zhang and Xufei Zheng and Leo Yu Zhang and Shengshan Hu and Shirui Pan},
  journal={arXiv preprint arXiv:2504.08798},
  year={2025}
}