VISTA: Unsupervised 2D Temporal Dependency Representations for Time Series Anomaly Detection

3 April 2025
Sinchee Chin, Fan Zhang, Xiaochen Yang, Jing-Hao Xue, Wenming Yang, Peng Jia, Guijin Wang, Luo Yingqun
AI4TS
Abstract

Time Series Anomaly Detection (TSAD) is essential for uncovering rare and potentially harmful events in unlabeled time series data. Existing methods are highly dependent on clean, high-quality inputs, making them susceptible to noise and real-world imperfections. Additionally, intricate temporal relationships in time series data are often inadequately captured in traditional 1D representations, leading to suboptimal modeling of dependencies. We introduce VISTA, a training-free, unsupervised TSAD algorithm designed to overcome these challenges. VISTA features three core modules: 1) Time Series Decomposition using Seasonal and Trend Decomposition via Loess (STL) to decompose noisy time series into trend, seasonal, and residual components; 2) Temporal Self-Attention, which transforms 1D time series into 2D temporal correlation matrices for richer dependency modeling and anomaly detection; and 3) Multivariate Temporal Aggregation, which uses a pretrained feature extractor to integrate cross-variable information into a unified, memory-efficient representation. VISTA's training-free approach enables rapid deployment and easy hyperparameter tuning, making it suitable for industrial applications. It achieves state-of-the-art performance on five multivariate TSAD benchmarks.
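
The pipeline sketched in the abstract can be illustrated with a short, self-contained example. The snippet below is only a minimal sketch of the decompose-then-2D idea, not the authors' implementation: STL from statsmodels supplies the trend/seasonal/residual split, a normalized outer product stands in for the temporal self-attention map, and the scoring rule, window length, and helper names (temporal_correlation_map, vista_like_score) are hypothetical choices made for this illustration.

# Minimal sketch of STL decomposition + 2D temporal-correlation scoring (illustrative only).
import numpy as np
from statsmodels.tsa.seasonal import STL

def temporal_correlation_map(window: np.ndarray) -> np.ndarray:
    """Turn a 1D window into a 2D matrix of pairwise time-step similarities."""
    w = (window - window.mean()) / (window.std() + 1e-8)
    return np.outer(w, w)  # (T, T) correlation-like matrix

def vista_like_score(series: np.ndarray, period: int, window: int = 64) -> np.ndarray:
    """Per-window anomaly scores from STL residual energy plus 2D-map structure.
    The scoring statistic here is a stand-in, not the paper's formulation."""
    stl = STL(series, period=period).fit()  # trend / seasonal / residual split
    resid = np.asarray(stl.resid)
    scores = []
    for start in range(0, len(series) - window + 1, window):
        seg = resid[start:start + window]
        corr = temporal_correlation_map(seg)
        off_diag = corr - np.diag(np.diag(corr))
        # Combine residual magnitude with off-diagonal dependency strength.
        scores.append(np.abs(seg).mean() + np.abs(off_diag).mean())
    return np.array(scores)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(2000)
    x = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(t.size)
    x[1200:1210] += 3.0  # injected anomaly segment
    print(vista_like_score(x, period=50).round(3))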

View on arXiv
@article{chin2025_2504.02498,
  title={VISTA: Unsupervised 2D Temporal Dependency Representations for Time Series Anomaly Detection},
  author={Sinchee Chin and Fan Zhang and Xiaochen Yang and Jing-Hao Xue and Wenming Yang and Peng Jia and Guijin Wang and Luo Yingqun},
  journal={arXiv preprint arXiv:2504.02498},
  year={2025}
}