Harnessing Vision-Language Models for Time Series Anomaly Detection

7 June 2025
Zelin He
Sarah Alnegheimish
Matthew Reimherr
Communities: AI4TS, VLM
Main: 9 pages · 7 figures · 15 tables · Bibliography: 3 pages · Appendix: 7 pages
Abstract

Time-series anomaly detection (TSAD) plays a vital role in a variety of fields, including healthcare, finance, and industrial monitoring. Prior methods, which mainly focus on training domain-specific models on numerical data, lack the visual-temporal reasoning capacity that human experts use to identify contextual anomalies. To fill this gap, we explore a solution based on vision-language models (VLMs). Recent studies have demonstrated the ability of VLMs on visual reasoning tasks, yet their direct application to time series has fallen short on both accuracy and efficiency. To harness the power of VLMs for TSAD, we propose a two-stage solution: (1) ViT4TS, a vision-screening stage built on a relatively lightweight pretrained vision encoder, which leverages 2-D time-series representations to accurately localize candidate anomalies; and (2) VLM4TS, a VLM-based stage that integrates global temporal context and VLM reasoning capacity to refine the detections on the candidates provided by ViT4TS. We show that, without any time-series training, VLM4TS outperforms time-series-pretrained and from-scratch baselines in most cases, yielding a 24.6% improvement in F1-max score over the best baseline. Moreover, VLM4TS consistently outperforms existing language-model-based TSAD methods and is, on average, 36 times more efficient in token usage.
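
The abstract describes the pipeline only at a high level, so the sketch below is a hedged illustration of the two-stage idea rather than the authors' implementation: the window and stride sizes, the 2-sigma distance-from-centroid screening score, and the embed_fn / ask_vlm callables (standing in for a pretrained vision encoder and a VLM chat API) are all assumptions introduced here for clarity.

# Illustrative two-stage pipeline in the spirit of ViT4TS -> VLM4TS.
# All concrete choices below (window=128, stride=64, the 2-sigma
# threshold, and the embed_fn / ask_vlm hooks) are assumptions.
import io
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render plots off-screen
import matplotlib.pyplot as plt
from PIL import Image

def window_to_image(values, size=224):
    """Render a slice of the series as a 2-D line plot, i.e. one of the
    '2-D time-series representations' the abstract refers to."""
    fig, ax = plt.subplots(figsize=(size / 100, size / 100), dpi=100)
    ax.plot(values, linewidth=1.0)
    ax.axis("off")
    buf = io.BytesIO()
    fig.savefig(buf, format="png", bbox_inches="tight")
    plt.close(fig)
    buf.seek(0)
    return Image.open(buf).convert("RGB").resize((size, size))

def screen_windows(series, embed_fn, window=128, stride=64):
    """Stage 1 (ViT4TS-style screening): embed each window image with a
    pretrained vision encoder (embed_fn: PIL image -> 1-D vector) and
    flag windows whose embedding lies far from the mean embedding. The
    distance-from-centroid score is an assumed stand-in for the paper's
    actual scoring rule."""
    starts = list(range(0, len(series) - window + 1, stride))
    embs = np.stack([embed_fn(window_to_image(series[s:s + window]))
                     for s in starts])
    scores = np.linalg.norm(embs - embs.mean(axis=0), axis=1)
    thresh = scores.mean() + 2.0 * scores.std()  # assumed threshold
    return [(s, s + window) for s, sc in zip(starts, scores) if sc > thresh]

def refine_with_vlm(series, candidates, ask_vlm):
    """Stage 2 (VLM4TS-style refinement): show each candidate to a VLM
    together with the full series for global temporal context, keeping
    only confirmed anomalies. ask_vlm(image, prompt) -> str is a
    placeholder for any image+text chat API."""
    full_view = window_to_image(series)  # global context in one image
    confirmed = []
    for start, end in candidates:
        prompt = (f"This plot shows an entire time series. Is the segment "
                  f"at indices {start}-{end} anomalous relative to the "
                  f"rest of the series? Answer yes or no.")
        if "yes" in ask_vlm(full_view, prompt).lower():
            confirmed.append((start, end))
    return confirmed

In this sketch the first stage stays cheap because the lightweight encoder sees one small window image at a time, while the VLM in the second stage is invoked only on the few surviving candidates, each judged against the full series; this division of labor mirrors the accuracy and token-efficiency argument made in the abstract.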

@article{he2025_2506.06836,
  title={Harnessing Vision-Language Models for Time Series Anomaly Detection},
  author={Zelin He and Sarah Alnegheimish and Matthew Reimherr},
  journal={arXiv preprint arXiv:2506.06836},
  year={2025}
}