
RelCon: Relative Contrastive Learning for a Motion Foundation Model for Wearable Data

International Conference on Learning Representations (ICLR), 2024
Main: 10 pages · Appendix: 2 pages · Bibliography: 7 pages · 5 figures · 5 tables
Abstract

We present RelCon, a novel self-supervised Relative Contrastive learning approach for training a motion foundation model from wearable accelerometry sensors. First, a learnable distance measure is trained to capture motif similarity and domain-specific semantic information such as rotation invariance. The learned distance then provides a measure of semantic similarity between pairs of accelerometry time-series, which we use to train our foundation model to capture relative relationships across time and across subjects. The foundation model is trained on 1 billion segments from 87,376 participants and achieves state-of-the-art performance across multiple downstream tasks, including human activity recognition and gait metric regression. To our knowledge, we are the first to demonstrate the generalizability of a foundation model trained on wearable motion data across distinct evaluation tasks.
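To make the "relative" idea concrete, the sketch below is a hypothetical simplification (not the paper's implementation): candidates are ranked by a learned distance to the anchor, and each candidate is treated as a positive against all candidates ranked farther away, yielding a softmax-style loss per rank. The function name, temperature parameter, and cosine-similarity choice are illustrative assumptions.

```python
import numpy as np

def relative_contrastive_loss(anchor, candidates, distances, temperature=0.1):
    """Hypothetical sketch of a relative contrastive loss.

    anchor:     (d,) embedding of the anchor segment
    candidates: (n, d) embeddings of candidate segments
    distances:  (n,) learned distances from the anchor (smaller = more similar)
    """
    # Cosine similarity between the anchor and each candidate embedding.
    sims = candidates @ anchor / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(anchor) + 1e-8
    )
    order = np.argsort(distances)        # nearest first, per the learned distance
    logits = sims[order] / temperature
    loss = 0.0
    # At rank i, candidate i is the positive; all farther candidates are negatives.
    for i in range(len(order) - 1):
        shifted = logits[i:] - logits[i:].max()  # subtract max for stability
        loss += -shifted[0] + np.log(np.exp(shifted).sum())
    return loss / (len(order) - 1)
```

Because each term is a negative log-softmax probability, the loss is non-negative and decreases as the embedding similarities agree with the learned-distance ranking.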
