
Contrast All the Time: Learning Time Series Representation from Temporal Consistency

Main: 7 pages · Appendix: 5 pages · Bibliography: 2 pages · 8 figures · 9 tables
Abstract

Representation learning for time series using contrastive learning has emerged as a critical technique for improving the performance of downstream tasks. To advance this effective approach, we introduce CaTT (Contrast All The Time), a new approach to unsupervised contrastive learning for time series that exploits the dynamics between temporally similar moments more efficiently and effectively than existing methods. CaTT departs from conventional time-series contrastive approaches that rely on data augmentations or selected views. Instead, it uses the full temporal dimension by contrasting all time steps in parallel. This is made possible by a scalable NT-pair formulation, which extends the classic N-pair loss across both batch and temporal dimensions, making the learning process end-to-end and more efficient. CaTT learns directly from the natural structure of temporal data, using repeated or adjacent time steps as implicit supervision, without the need for pair selection heuristics. We demonstrate that this approach produces superior embeddings that yield better performance on downstream tasks. Additionally, training is faster than with other contrastive learning approaches, making CaTT suitable for large-scale and real-world time series applications. The source code is publicly available at this https URL.
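
The abstract describes an NT-pair loss that contrasts every time step against all others across both the batch and temporal dimensions, with adjacent time steps serving as implicit positives. The following is a minimal sketch of that idea, not the authors' implementation: the function name, the neighbour-as-positive rule, and the temperature value are illustrative assumptions.

```python
# Sketch of an NT-pair-style loss over batch x time embeddings.
# Assumption: each time step's positive is its temporal neighbour in the
# same sequence; all other B*T embeddings act as negatives.
import torch
import torch.nn.functional as F


def nt_pair_loss(z: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z: (B, T, D) per-time-step embeddings from an encoder."""
    B, T, D = z.shape
    z = F.normalize(z, dim=-1).reshape(B * T, D)   # flatten batch and time
    sim = z @ z.t() / temperature                  # (B*T, B*T) similarities
    sim.fill_diagonal_(float("-inf"))              # exclude self-similarity

    # Positive for step t: its neighbour t+1 in the same sequence
    # (the last step of each sequence pairs backwards with t-1).
    idx = torch.arange(B * T, device=z.device)
    pos = idx + 1
    last = (idx % T) == T - 1
    pos[last] = idx[last] - 1

    # Cross-entropy over all B*T candidates: one positive, the rest negatives.
    return F.cross_entropy(sim, pos)
```

Usage would look like `loss = nt_pair_loss(encoder(x))` for a batch `x` of sequences, so every time step is contrasted in a single parallel pass rather than through sampled pairs.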
