

Towards Imperceptible Adversarial Attacks for Time Series Classification with Local Perturbations and Frequency Analysis

25 March 2025
Wenwei Gu
Renyi Zhong
Jianping Zhang
Michael R. Lyu
Abstract

Adversarial attacks in time series classification (TSC) models have recently gained attention due to their potential to compromise model robustness. Imperceptibility is crucial, as adversarial examples detected by the human vision system (HVS) can render attacks ineffective. Many existing methods fail to produce high-quality imperceptible examples, often generating perturbations with more perceptible low-frequency components, like square waves, and global perturbations that reduce stealthiness. This paper aims to improve the imperceptibility of adversarial attacks on TSC models by addressing frequency components and time series locality. We propose the Shapelet-based Frequency-domain Attack (SFAttack), which uses local perturbations focused on time series shapelets to enhance discriminative information and stealthiness. Additionally, we introduce a low-frequency constraint to confine perturbations to high-frequency components, enhancing imperceptibility.
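The two constraints described above, locality (perturb only a discriminative shapelet region) and a frequency restriction (suppress perceptible low-frequency components), can be illustrated with a minimal sketch. This is not the authors' SFAttack implementation; the function name, the fixed window standing in for a shapelet, and the `cutoff_ratio` parameter are all illustrative assumptions.

```python
import numpy as np

def high_freq_local_perturbation(x, start, end, eps=0.05, cutoff_ratio=0.5, seed=0):
    """Sketch of a locality- and frequency-constrained perturbation.

    A random perturbation is confined to the window [start, end) -- a
    stand-in for a discriminative shapelet -- and filtered so that only
    frequency components above `cutoff_ratio` of the window's spectrum
    survive, mimicking a low-frequency constraint.
    """
    rng = np.random.default_rng(seed)
    n = end - start
    delta = rng.standard_normal(n)

    # Remove the low-frequency part of the spectrum (including DC),
    # keeping only the less perceptible high-frequency components.
    spec = np.fft.rfft(delta)
    cutoff = int(len(spec) * cutoff_ratio)
    spec[:cutoff] = 0.0
    delta = np.fft.irfft(spec, n=n)

    # Scale to an L-infinity budget of eps.
    delta = eps * delta / (np.abs(delta).max() + 1e-12)

    x_adv = x.copy()
    x_adv[start:end] += delta  # perturbation is zero outside the window
    return x_adv
```

In a full attack, the window would be chosen by shapelet discovery and the perturbation optimized against the classifier's loss rather than drawn at random; the sketch only shows how the two imperceptibility constraints compose.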

@article{gu2025_2503.19519,
  title={Towards Imperceptible Adversarial Attacks for Time Series Classification with Local Perturbations and Frequency Analysis},
  author={Wenwei Gu and Renyi Zhong and Jianping Zhang and Michael R. Lyu},
  journal={arXiv preprint arXiv:2503.19519},
  year={2025}
}