A Systematic Evaluation of Generated Time Series and Their Effects in Self-Supervised Pretraining

15 August 2024
Audrey Der, Chin-Chia Michael Yeh, Xin Dai, Huiyuan Chen, Yan Zheng, Yujie Fan, Zhongfang Zhuang, Vivian Lai, Junpeng Wang, Liang Wang, Wei Zhang, Eamonn J. Keogh
Abstract

Self-supervised Pretrained Models (PTMs) have demonstrated remarkable performance on computer vision and natural language processing tasks. These successes have prompted researchers to design PTMs for time series data. In our experiments, most self-supervised time series PTMs were surpassed by simple supervised models. We hypothesize that this undesirable phenomenon is caused by data scarcity. In response, we test six time series generation methods, use the generated data in pretraining in lieu of the real data, and examine the effects on classification performance. Our results indicate that replacing a real-data pretraining set with a greater volume of only generated samples produces a noticeable improvement.
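
The abstract describes an evaluation pipeline: pretrain an encoder on generated time series only, then measure downstream classification accuracy with the encoder frozen. Below is a minimal, hypothetical sketch of that loop. The sinusoid generator, the PCA "encoder" standing in for a self-supervised PTM, the toy sine-vs-square classification task, and the nearest-centroid probe are all illustrative assumptions, not the paper's actual generators, models, or datasets.

```python
# Hypothetical sketch: pretrain on *generated* series, evaluate a frozen
# encoder on a small labeled task. Every component here is an illustrative
# stand-in for the methods evaluated in the paper.
import numpy as np

rng = np.random.default_rng(0)
SERIES_LEN = 128


def generate_pretraining_set(n_samples: int) -> np.ndarray:
    """Stand-in generator: random noisy sinusoids (assumption, not a paper method)."""
    t = np.linspace(0, 1, SERIES_LEN)
    freqs = rng.uniform(1, 8, size=(n_samples, 1))
    phases = rng.uniform(0, 2 * np.pi, size=(n_samples, 1))
    return np.sin(2 * np.pi * freqs * t + phases) + 0.1 * rng.standard_normal((n_samples, SERIES_LEN))


def pretrain_encoder(x: np.ndarray, dim: int = 16) -> np.ndarray:
    """'Pretraining' reduced to PCA on the generated data; returns a projection
    matrix. A real PTM would be a neural encoder with a self-supervised loss."""
    x = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[:dim].T  # shape (SERIES_LEN, dim)


def labeled_dataset(n_per_class: int = 50):
    """Toy downstream task (assumption): noisy sine vs. square waves."""
    t = np.linspace(0, 1, SERIES_LEN)
    sines = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal((n_per_class, SERIES_LEN))
    squares = np.sign(np.sin(2 * np.pi * 3 * t)) + 0.3 * rng.standard_normal((n_per_class, SERIES_LEN))
    x = np.vstack([sines, squares])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return x, y


def nearest_centroid_accuracy(z_train, y_train, z_test, y_test) -> float:
    """Simple classifier on frozen embeddings, used as the downstream probe."""
    centroids = np.stack([z_train[y_train == c].mean(axis=0) for c in np.unique(y_train)])
    preds = np.argmin(((z_test[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    return float((preds == y_test).mean())


if __name__ == "__main__":
    x, y = labeled_dataset()
    idx = rng.permutation(len(y))
    tr, te = idx[: len(y) // 2], idx[len(y) // 2 :]
    # Vary the volume of generated pretraining data, mirroring the question of
    # whether more generated samples can replace the real pretraining set.
    for n_generated in (100, 1000, 10000):
        proj = pretrain_encoder(generate_pretraining_set(n_generated))
        z = x @ proj  # frozen encoder features
        acc = nearest_centroid_accuracy(z[tr], y[tr], z[te], y[te])
        print(f"generated pretraining samples={n_generated:6d}  accuracy={acc:.3f}")
```

Sweeping `n_generated` mimics the abstract's central comparison: whether a larger volume of purely generated pretraining samples can stand in for a real-data pretraining set when the encoder is later used for classification.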
