
Unleashing the Power of Pre-trained Encoders for Universal Adversarial Attack Detection

Abstract

Adversarial attacks pose a critical security threat to real-world AI systems by injecting human-imperceptible perturbations into benign samples to induce misclassification in deep learning models. While existing detection methods, such as Bayesian uncertainty estimation and activation pattern analysis, have achieved progress through feature engineering, their reliance on handcrafted feature design and prior knowledge of attack patterns limits generalization capabilities and incurs high engineering costs. To address these limitations, this paper proposes a lightweight adversarial detection framework based on the large-scale pre-trained vision-language model CLIP. Departing from conventional adversarial feature characterization paradigms, we innovatively adopt an anomaly detection perspective. By jointly fine-tuning CLIP's dual visual-text encoders with trainable adapter networks and learnable prompts, we construct a compact representation space tailored for natural images. Notably, our detection architecture achieves substantial improvements in generalization capability across both known and unknown attack patterns compared to traditional methods, while significantly reducing training overhead. This study provides a novel technical pathway for establishing a parameter-efficient and attack-agnostic defense paradigm, markedly enhancing the robustness of vision systems against evolving adversarial threats.
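To make the detection idea concrete, below is a minimal sketch of the anomaly-detection view the abstract describes: a frozen CLIP-style image embedding is passed through a lightweight trainable adapter, and its distance from a bank of prompt embeddings representing natural images serves as the adversarial score. All dimensions, the adapter shape, and the cosine-based score are illustrative assumptions, not the paper's exact architecture, and the weights here are random stand-ins for what would be learned during fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: CLIP ViT-B/32 image features are 512-d;
# the compact adapted space here is 128-d (an illustrative choice).
D_CLIP, D_ADAPT = 512, 128

def adapter(x, W1, b1, W2, b2):
    """Lightweight adapter: linear -> ReLU -> linear (placeholder for a trained module)."""
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Randomly initialised adapter weights (in the paper these would be fine-tuned).
W1 = rng.normal(0, 0.02, (D_CLIP, D_ADAPT)); b1 = np.zeros(D_ADAPT)
W2 = rng.normal(0, 0.02, (D_ADAPT, D_ADAPT)); b2 = np.zeros(D_ADAPT)

# Stand-ins for learnable text-prompt embeddings describing natural images,
# assumed already projected into the adapted space.
prompt_bank = rng.normal(0, 1, (10, D_ADAPT))

def anomaly_score(clip_image_embedding):
    """Higher score = further from the compact natural-image space (range [0, 2])."""
    z = adapter(clip_image_embedding, W1, b1, W2, b2)
    best = max(cosine(z, p) for p in prompt_bank)
    return 1.0 - best

x = rng.normal(0, 1, D_CLIP)  # mock frozen CLIP image feature
score = anomaly_score(x)
print(round(score, 4))
```

In use, a score above a threshold calibrated on clean validation images would flag the input as adversarial; because no attack-specific features are modeled, the same scoring rule applies to both known and unknown attacks.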

@article{zhang2025_2504.00429,
  title={Unleashing the Power of Pre-trained Encoders for Universal Adversarial Attack Detection},
  author={Yinghe Zhang and Chi Liu and Shuai Zhou and Sheng Shen and Peng Gui},
  journal={arXiv preprint arXiv:2504.00429},
  year={2025}
}