ResearchTrend.AI
AIGC for Industrial Time Series: From Deep Generative Models to Large Generative Models

16 July 2024
Lei Ren
Haiteng Wang
Jinwang Li
Yang Tang
Chunhua Yang
    AI4TS
    AI4CE
Abstract

With the remarkable success of generative models like ChatGPT, Artificial Intelligence Generated Content (AIGC) is undergoing explosive development. Not limited to text and images, generative models can also generate industrial time series data, addressing challenges such as difficult data collection and annotation. Due to their outstanding generation ability, they have been widely used in the Internet of Things, the metaverse, and cyber-physical-social systems to enhance the efficiency of industrial production. In this paper, we present a comprehensive overview of generative models for industrial time series, from deep generative models (DGMs) to large generative models (LGMs). First, a DGM-based AIGC framework is proposed for industrial time series generation. Within this framework, we survey advanced industrial DGMs and present a multi-perspective categorization. Furthermore, we systematically analyze the critical technologies required to construct industrial LGMs from four aspects: large-scale industrial datasets, LGM architectures for complex industrial characteristics, self-supervised training for industrial time series, and fine-tuning for industrial downstream tasks. Finally, we summarize the challenges and future directions for the development of generative models in industry.
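To illustrate the DGM-based generation idea the abstract describes (sample a latent code, then decode it into a synthetic time series), here is a minimal hypothetical sketch. The linear "decoder" weights below are random stand-ins, not a trained model; in an actual DGM such as a VAE or GAN they would be learned from industrial sensor data:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8    # size of the latent code z
SERIES_LEN = 64   # length of each generated time series

# Stand-in "decoder": a fixed random linear map from latent space to
# time-series space. A real deep generative model would learn these
# parameters (and use a nonlinear network) from training data.
W = rng.normal(size=(SERIES_LEN, LATENT_DIM))

def generate(n_samples: int) -> np.ndarray:
    """Sample latent codes z ~ N(0, I) and decode them to time series."""
    z = rng.normal(size=(n_samples, LATENT_DIM))
    return z @ W.T  # shape: (n_samples, SERIES_LEN)

samples = generate(4)
print(samples.shape)  # (4, 64)
```

The same sample-then-decode structure underlies the VAE-, GAN-, and diffusion-based industrial generators the survey categorizes; only the decoder family and training objective change.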

View on arXiv
@article{ren2025_2407.11480,
  title={AIGC for Industrial Time Series: From Deep Generative Models to Large Generative Models},
  author={Lei Ren and Haiteng Wang and Jinwang Li and Yang Tang and Chunhua Yang},
  journal={arXiv preprint arXiv:2407.11480},
  year={2025}
}