TimeDART: A Diffusion Autoregressive Transformer for Self-Supervised Time Series Representation

24 February 2025
Daoyu Wang
Mingyue Cheng
Zhiding Liu
Qi Liu
Enhong Chen
    AI4TS
    DiffM
Abstract

Self-supervised learning has garnered increasing attention in time series analysis, as it benefits various downstream tasks and reduces reliance on labeled data. Despite this effectiveness, existing methods often struggle to capture both long-term dynamic evolution and subtle local patterns in a unified manner. In this work, we propose TimeDART, a novel self-supervised time series pre-training framework that unifies two powerful generative paradigms to learn more transferable representations. Specifically, we first employ a causal Transformer encoder, accompanied by a patch-based embedding strategy, to model the evolving trends from left to right. Building on this global modeling, we further introduce a denoising diffusion process to capture fine-grained local patterns through forward diffusion and reverse denoising. Finally, we optimize the model in an autoregressive manner. As a result, TimeDART effectively accounts for both global and local sequence features in a coherent way. We conduct extensive experiments on public datasets for time series forecasting and classification. The experimental results demonstrate that TimeDART consistently outperforms the compared baselines, validating the effectiveness of our approach. Our code is available at this https URL.
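The sketch below illustrates, in PyTorch-style pseudocode, the kind of pipeline the abstract describes: patch-based embedding, a causal Transformer encoder for left-to-right context, and a per-patch denoising objective optimized autoregressively. Patch length, model width, the noise schedule, and all module names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of diffusion + autoregressive pre-training in the spirit of the
# abstract: patch embedding -> causal Transformer -> per-patch denoising loss.
# Hyperparameters and the cosine noise schedule are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchEmbed(nn.Module):
    """Split a univariate series into non-overlapping patches and project them."""
    def __init__(self, patch_len: int = 16, d_model: int = 128):
        super().__init__()
        self.patch_len = patch_len
        self.proj = nn.Linear(patch_len, d_model)

    def forward(self, x):                               # x: (batch, seq_len)
        b, t = x.shape
        x = x[:, : t - t % self.patch_len]               # drop the ragged tail
        patches = x.reshape(b, -1, self.patch_len)        # (batch, n_patches, patch_len)
        return self.proj(patches), patches


class CausalEncoder(nn.Module):
    """Transformer encoder with a causal (left-to-right) attention mask."""
    def __init__(self, d_model: int = 128, n_layers: int = 4, n_heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, z):                                # z: (batch, n_patches, d_model)
        n = z.size(1)
        mask = torch.triu(torch.ones(n, n, device=z.device, dtype=torch.bool), 1)
        return self.encoder(z, mask=mask)


class DenoisingHead(nn.Module):
    """Predict the noise added to a patch, conditioned on the causal context."""
    def __init__(self, d_model: int = 128, patch_len: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model + patch_len + 1, d_model),
            nn.GELU(),
            nn.Linear(d_model, patch_len),
        )

    def forward(self, context, noisy_patch, t):
        # t is the normalized diffusion step, broadcast per patch.
        return self.net(torch.cat([context, noisy_patch, t], dim=-1))


def pretrain_step(x, embed, encoder, head, n_steps: int = 100):
    """One self-supervised step: next-patch denoising under a causal context."""
    z, patches = embed(x)                                # tokens and raw patches
    h = encoder(z)                                       # causal representations
    # Context at position i predicts patch i+1 (autoregressive factorization).
    context, target = h[:, :-1], patches[:, 1:]

    # Forward diffusion: corrupt the target patches with Gaussian noise.
    t = torch.randint(1, n_steps + 1, (x.size(0), target.size(1), 1), device=x.device)
    alpha_bar = torch.cos(0.5 * torch.pi * t / n_steps) ** 2   # assumed cosine schedule
    noise = torch.randn_like(target)
    noisy = alpha_bar.sqrt() * target + (1 - alpha_bar).sqrt() * noise

    # Reverse denoising objective: predict the injected noise per patch.
    pred = head(context, noisy, t.float() / n_steps)
    return F.mse_loss(pred, noise)
```

In such a setup, pre-training would backpropagate the returned loss over batches of raw series, and only the patch embedding and causal encoder would be kept as the representation backbone for downstream forecasting or classification.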

View on arXiv
@article{wang2025_2410.05711,
  title={TimeDART: A Diffusion Autoregressive Transformer for Self-Supervised Time Series Representation},
  author={Daoyu Wang and Mingyue Cheng and Zhiding Liu and Qi Liu and Enhong Chen},
  journal={arXiv preprint arXiv:2410.05711},
  year={2025}
}