MADLLM: Multivariate Anomaly Detection via Pre-trained LLMs

13 April 2025
Wei Tao
Xiaoyang Qu
Kai Lu
Jiguang Wan
Guokuan Li
Jianzong Wang
Abstract

When applying pre-trained large language models (LLMs) to anomaly detection tasks, the multivariate time series (MTS) modality of anomaly detection does not align with the text modality of LLMs. Existing methods simply transform the MTS data into multiple univariate time series sequences, which ignores the correlations among different variables. This paper introduces MADLLM, a novel multivariate anomaly detection method built on pre-trained LLMs. We design a new triple encoding technique to align the MTS modality with the text modality of LLMs. Specifically, this technique integrates the traditional patch embedding method with two novel embedding approaches: Skip Embedding, which alters the order of patch processing in traditional methods so the LLM retains knowledge of earlier features, and Feature Embedding, which leverages contrastive learning so the model better captures the correlations between different features. Experimental results demonstrate that our method outperforms state-of-the-art methods on various public anomaly detection datasets.
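
To make the triple encoding idea concrete, below is a minimal PyTorch sketch of a front-end that tokenizes an MTS window for an LLM backbone. The patch length, skip stride, embedding width, and the simplified contrastive objective in FeatureEmbedding (treating each variable's most-correlated peer as its positive) are illustrative assumptions drawn from the abstract, not the authors' released implementation.

# Illustrative sketch of the "triple encoding" front-end described above.
# Module names, hyperparameters, and the contrastive objective are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchEmbedding(nn.Module):
    """Traditional patch embedding: split each variable into fixed-length patches."""
    def __init__(self, patch_len: int, d_model: int):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_vars, seq_len) -> (batch, n_vars, n_patches, d_model)
        patches = x.unfold(-1, self.patch_len, self.patch_len)
        return self.proj(patches)


class SkipEmbedding(nn.Module):
    """Reorders patch tokens with a fixed skip stride so earlier patches are
    revisited later in the token sequence (assumed reordering scheme)."""
    def __init__(self, skip: int = 2):
        super().__init__()
        self.skip = skip

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        n = patch_tokens.size(2)
        order = torch.cat([torch.arange(s, n, self.skip) for s in range(self.skip)])
        return patch_tokens[:, :, order, :]


class FeatureEmbedding(nn.Module):
    """Learnable per-variable embeddings trained with a simple contrastive loss
    that pulls the embeddings of strongly correlated variables together."""
    def __init__(self, n_vars: int, d_model: int):
        super().__init__()
        self.emb = nn.Embedding(n_vars, d_model)

    def forward(self) -> torch.Tensor:
        return self.emb.weight  # (n_vars, d_model)

    def contrastive_loss(self, x: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
        # x: (batch, n_vars, seq_len). Use each variable's most-correlated
        # other variable as its positive (a simplified stand-in objective).
        z = F.normalize(self.emb.weight, dim=-1)
        corr = torch.corrcoef(x.mean(dim=0)).abs()
        corr.fill_diagonal_(-1.0)
        targets = corr.argmax(dim=-1)
        logits = z @ z.t() / temperature
        logits = logits.masked_fill(torch.eye(z.size(0), dtype=torch.bool), float("-inf"))
        return F.cross_entropy(logits, targets)


class TripleEncoder(nn.Module):
    """Combines the three embeddings into one token sequence for an LLM backbone."""
    def __init__(self, n_vars: int, patch_len: int, d_model: int, skip: int = 2):
        super().__init__()
        self.patch = PatchEmbedding(patch_len, d_model)
        self.reorder = SkipEmbedding(skip)
        self.feature = FeatureEmbedding(n_vars, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.reorder(self.patch(x))                 # (B, V, P, d_model)
        tokens = tokens + self.feature()[None, :, None, :]   # add per-variable embedding
        return tokens.flatten(1, 2)                          # (B, V * P, d_model)


if __name__ == "__main__":
    x = torch.randn(8, 5, 96)                  # 8 windows, 5 variables, 96 time steps
    enc = TripleEncoder(n_vars=5, patch_len=16, d_model=64)
    print(enc(x).shape)                        # torch.Size([8, 30, 64])
    print(enc.feature.contrastive_loss(x))     # auxiliary contrastive term

In this sketch the encoder output of shape (batch, n_vars * n_patches, d_model) would be fed to the pre-trained LLM's transformer layers in place of text token embeddings, with the contrastive loss added as an auxiliary training term.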

@article{tao2025_2504.09504,
  title={MADLLM: Multivariate Anomaly Detection via Pre-trained LLMs},
  author={Wei Tao and Xiaoyang Qu and Kai Lu and Jiguang Wan and Guokuan Li and Jianzong Wang},
  journal={arXiv preprint arXiv:2504.09504},
  year={2025}
}