Adversarial attacks on time series classification (TSC) models have recently gained attention due to their potential to compromise model robustness. Imperceptibility is crucial: adversarial examples detected by the human vision system (HVS) render an attack ineffective. Many existing methods fail to produce high-quality imperceptible examples, generating perturbations dominated by perceptible low-frequency components (e.g., square-wave-like shapes) and applied globally across the series, both of which reduce stealthiness. This paper aims to improve the imperceptibility of adversarial attacks on TSC models by addressing frequency components and time series locality. We propose the Shapelet-based Frequency-domain Attack (SFAttack), which applies local perturbations concentrated on time series shapelets, the subsequences carrying the most discriminative information, to improve stealthiness. Additionally, we introduce a low-frequency constraint that confines perturbations to high-frequency components, further enhancing imperceptibility.
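To make the low-frequency constraint concrete, the following is a minimal sketch (not the paper's actual implementation) of how a perturbation could be projected onto high-frequency components via the FFT; the function name, the `cutoff_bins` parameter, and the use of NumPy are all illustrative assumptions.

```python
import numpy as np

def high_frequency_projection(delta, cutoff_bins):
    """Hypothetical sketch of a low-frequency constraint: zero the
    perturbation's first `cutoff_bins` Fourier coefficients so the
    adversarial noise carries no low-frequency (e.g. square-wave-like)
    energy that the human eye picks up easily."""
    spectrum = np.fft.rfft(delta)
    spectrum[:cutoff_bins] = 0.0  # suppress DC and low-frequency bins
    return np.fft.irfft(spectrum, n=len(delta))

# Example: constrain a random perturbation on a length-128 series
rng = np.random.default_rng(0)
delta = rng.normal(scale=0.1, size=128)
constrained = high_frequency_projection(delta, cutoff_bins=8)

# The constrained perturbation has (numerically) zero low-frequency energy
print(np.abs(np.fft.rfft(constrained)[:8]).max() < 1e-10)  # True
```

In an attack loop, such a projection would be applied after each gradient step so the accumulated perturbation stays in the high-frequency band.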
@article{gu2025_2503.19519,
  title={Towards Imperceptible Adversarial Attacks for Time Series Classification with Local Perturbations and Frequency Analysis},
  author={Wenwei Gu and Renyi Zhong and Jianping Zhang and Michael R. Lyu},
  journal={arXiv preprint arXiv:2503.19519},
  year={2025}
}