
A Survey and Evaluation of Adversarial Attacks for Object Detection

Abstract

Deep learning models achieve remarkable accuracy in computer vision tasks, yet remain vulnerable to adversarial examples: carefully crafted perturbations to input images that can deceive these models into making confident but incorrect predictions. This vulnerability poses significant risks in high-stakes applications such as autonomous vehicles, security surveillance, and safety-critical inspection systems. While the existing literature extensively covers adversarial attacks in image classification, comprehensive analyses of such attacks on object detection systems remain limited. This paper presents a novel taxonomic framework for categorizing adversarial attacks specific to object detection architectures, synthesizes existing robustness metrics, and provides a comprehensive empirical evaluation of state-of-the-art attack methodologies on popular object detection models, including both traditional detectors and modern detectors with vision-language pretraining. Through rigorous analysis of open-source attack implementations and their effectiveness across diverse detection architectures, we derive key insights into attack characteristics. Furthermore, we delineate critical research gaps and emerging challenges to guide future investigations in securing object detection systems against adversarial threats. Our findings establish a foundation for developing more robust detection models while highlighting the urgent need for standardized evaluation protocols in this rapidly evolving domain.
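To make the notion of an adversarial perturbation against a detector concrete, the following is a minimal FGSM-style sketch (single gradient-sign step on the input image). It is illustrative only: the choice of a torchvision Faster R-CNN, the summed training losses, and the epsilon value are assumptions for this example and are not the specific attack configurations evaluated in the paper.

# Minimal sketch of an FGSM-style perturbation against an object detector.
# Assumptions: torchvision Faster R-CNN, summed detection losses, epsilon = 8/255.
import torch
import torchvision

def fgsm_perturb(model, image, targets, epsilon=8 / 255):
    """Return an adversarially perturbed copy of `image` (C, H, W, values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    # torchvision detection models return a dict of losses in training mode.
    model.train()
    loss_dict = model([image], [targets])
    loss = sum(loss_dict.values())
    loss.backward()
    # Single gradient-sign step on the input, clipped back to the valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    image = torch.rand(3, 480, 640)  # stand-in for a real input image
    targets = {"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
               "labels": torch.tensor([1])}
    adv_image = fgsm_perturb(model, image, targets)
    print("max pixel change:", (adv_image - image).abs().max().item())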

@article{nguyen2025_2408.01934,
  title={A Survey and Evaluation of Adversarial Attacks for Object Detection},
  author={Khoi Nguyen Tiet Nguyen and Wenyu Zhang and Kangkang Lu and Yuhuan Wu and Xingjian Zheng and Hui Li Tan and Liangli Zhen},
  journal={arXiv preprint arXiv:2408.01934},
  year={2025}
}