ResearchTrend.AI

TSRM: A Lightweight Temporal Feature Encoding Architecture for Time Series Forecasting and Imputation

26 April 2025
Robert Leppich
Michael Stenger
Daniel Grillmeyer
Vanessa Borst
Samuel Kounev
    AI4TS
    AI4CE
Abstract

We introduce a temporal feature encoding architecture called Time Series Representation Model (TSRM) for multivariate time series forecasting and imputation. The architecture is structured around CNN-based representation layers, each dedicated to an independent representation learning task and designed to capture diverse temporal patterns, followed by an attention-based feature extraction layer and a merge layer that aggregates the extracted features. The architecture is fundamentally based on a configuration inspired by a Transformer encoder, with self-attention mechanisms at its core. In our empirical evaluation, TSRM outperforms state-of-the-art approaches on most of the seven established benchmark datasets considered, for both forecasting and imputation tasks, while significantly reducing complexity in the form of learnable parameters. The source code is available at this https URL.
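The abstract describes each block as a stack of CNN-based representation layers feeding an attention-based feature extraction layer and a merge layer. The sketch below illustrates one plausible reading of that pipeline in plain NumPy; the layer shapes, the depthwise convolution, the concatenation-based merge, and the residual connection are all assumptions for illustration, not the authors' exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Depthwise 1D convolution with 'same' padding.
    x: (T, C) multivariate series, w: (K, C) per-channel kernel."""
    K, _ = w.shape
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x, dtype=float)
    for t in range(x.shape[0]):
        out[t] = np.sum(xp[t:t + K] * w, axis=0)
    return out

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over time. x: (T, C)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=-1)
    return scores @ v

def tsrm_block(x, kernels, Wq, Wk, Wv, Wm):
    """Hypothetical TSRM-style block: parallel CNN representation layers
    (different kernel sizes for diverse temporal patterns), attention-based
    feature extraction, then a merge layer, with an encoder-style residual."""
    reps = [conv1d(x, w) for w in kernels]            # representation layers
    feats = [self_attention(r, Wq, Wk, Wv) for r in reps]
    merged = np.concatenate(feats, axis=1) @ Wm       # merge layer
    return x + merged                                 # residual connection

T, C = 24, 3
x = rng.standard_normal((T, C))
kernels = [rng.standard_normal((k, C)) * 0.1 for k in (3, 5, 7)]
Wq, Wk, Wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
Wm = rng.standard_normal((3 * C, C)) * 0.1
y = tsrm_block(x, kernels, Wq, Wk, Wv, Wm)
print(y.shape)  # output keeps the input shape (T, C)
```

Because the block is shape-preserving, several such blocks could be stacked like Transformer encoder layers; the small weight matrices hint at why such a design can stay lightweight in learnable parameters.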

@article{leppich2025_2504.18878,
  title={TSRM: A Lightweight Temporal Feature Encoding Architecture for Time Series Forecasting and Imputation},
  author={Robert Leppich and Michael Stenger and Daniel Grillmeyer and Vanessa Borst and Samuel Kounev},
  journal={arXiv preprint arXiv:2504.18878},
  year={2025}
}