
Towards Adaptive Meta-Gradient Adversarial Examples for Visual Tracking

Abstract

In recent years, visual tracking methods based on convolutional neural networks and Transformers have achieved remarkable performance and have been successfully applied in fields such as autonomous driving. However, the numerous security issues exposed by deep learning models have gradually undermined the reliable deployment of visual tracking methods in real-world scenarios. Revealing the security vulnerabilities of existing visual trackers through effective adversarial attacks has therefore become a critical problem. To this end, we propose an adaptive meta-gradient adversarial attack (AMGA) method for visual tracking. AMGA integrates multi-model ensembles with a meta-learning strategy and combines momentum mechanisms with Gaussian smoothing, significantly enhancing the transferability and attack effectiveness of adversarial examples. AMGA randomly selects models from a large model repository to construct diverse tracking scenarios, then iteratively performs both white- and black-box adversarial attacks in each scenario, optimizing the gradient direction of each model. This paradigm narrows the gap between white- and black-box adversarial attacks, yielding strong attack performance in black-box settings. Extensive experimental results on large-scale datasets such as OTB2015, LaSOT, and GOT-10k demonstrate that AMGA significantly improves the attack performance, transferability, and deceptiveness of adversarial examples. Code and data are available at this https URL.
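The abstract outlines the attack loop at a high level: sample models from a repository, alternate white-box (meta-train) and black-box (meta-test) roles, and stabilize updates with momentum and Gaussian smoothing. Below is a minimal, illustrative PyTorch sketch of that style of attack, not the authors' released implementation: the model set, the scalar `loss_fn`, the `meta_ensemble_attack` name, all hyperparameters, and the first-order meta-gradient combination are assumptions made for exposition (the tracker forward pass is abstracted as `m(x)`).

```python
# Illustrative sketch of a meta-gradient ensemble attack with momentum and
# Gaussian gradient smoothing. All names and hyperparameters are assumptions,
# not the paper's released code. Images are assumed to lie in [0, 1].
import random
import torch
import torch.nn.functional as F

def gaussian_kernel(size=5, sigma=1.0, channels=3):
    # Depthwise 2D Gaussian kernel used to smooth gradients.
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    k2d = torch.outer(g, g)
    k2d = k2d / k2d.sum()
    return k2d.expand(channels, 1, size, size).clone()

def meta_ensemble_attack(models, x, loss_fn, eps=8 / 255, alpha=1 / 255,
                         steps=10, mu=1.0, n_train=2):
    """Sketch: per step, a random subset of `models` acts as the white-box
    (meta-train) set while one held-out model simulates the black-box
    (meta-test) target, so the perturbation is pushed to transfer."""
    channels = x.shape[1]
    kernel = gaussian_kernel(channels=channels).to(x.device)
    delta = torch.zeros_like(x, requires_grad=True)
    momentum = torch.zeros_like(x)
    for _ in range(steps):
        picked = random.sample(models, n_train + 1)  # needs len(models) > n_train
        train_models, test_model = picked[:n_train], picked[-1]
        # Meta-train: average white-box gradients over the sampled models.
        loss_tr = sum(loss_fn(m(x + delta)) for m in train_models) / n_train
        g_tr, = torch.autograd.grad(loss_tr, delta)
        # Temporary inner step, then a meta-test gradient on the held-out
        # model (first-order approximation of the meta-gradient).
        adapted = (delta + alpha * g_tr.sign()).detach().requires_grad_(True)
        g_te, = torch.autograd.grad(loss_fn(test_model(x + adapted)), adapted)
        g = g_tr + g_te
        # Gaussian smoothing of the combined gradient.
        g = F.conv2d(g, kernel, padding=kernel.shape[-1] // 2, groups=channels)
        # Momentum accumulation, MI-FGSM style (L1-normalized gradient).
        momentum = mu * momentum + g / g.abs().mean().clamp_min(1e-12)
        # Signed step, projected back onto the eps-ball and valid pixel range.
        delta = (delta + alpha * momentum.sign()).clamp(-eps, eps)
        delta = ((x + delta).clamp(0, 1) - x).detach().requires_grad_(True)
    return (x + delta).detach()
```

Holding out one model per step mirrors the abstract's goal of minimizing the gap between white- and black-box attacks: the perturbation is rewarded only when it also fools a model whose gradients did not drive the current white-box update.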

@article{tian2025_2505.08999,
  title={Towards Adaptive Meta-Gradient Adversarial Examples for Visual Tracking},
  author={Wei-Long Tian and Peng Gao and Xiao Liu and Long Xu and Hamido Fujita and Hanan Aljuai and Mao-Li Wang},
  journal={arXiv preprint arXiv:2505.08999},
  year={2025}
}