MoCA: Multi-modal Cross-masked Autoencoder for Digital Health Measurements

2 June 2025
Howon Ryu, Y. Chen, Yacun Wang, Andrea Z. LaCroix, Chongzhi Di, L. Natarajan, Yu Wang, Jingjing Zou
arXiv: 2506.02260 (abs, PDF, HTML) · GitHub
Main: 21 pages · Appendix: 8 pages · Bibliography: 5 pages · 11 figures · 10 tables
Abstract

The growing prevalence of digital health technologies has led to the generation of complex multi-modal data, such as physical activity measurements simultaneously collected from multiple sensors on mobile and wearable devices. These data hold immense potential for advancing health studies, but current methods predominantly rely on supervised learning, requiring extensive labeled datasets that are often expensive or impractical to obtain, especially in clinical studies. To address this limitation, we propose a self-supervised learning framework called Multi-modal Cross-masked Autoencoder (MoCA) that leverages cross-modality masking and a Transformer autoencoder architecture to exploit both temporal correlations within modalities and cross-modal correlations between data streams. We also provide theoretical guarantees supporting the effectiveness of the cross-modality masking scheme in MoCA. Comprehensive experiments and ablation studies demonstrate that our method outperforms existing approaches in both reconstruction and downstream tasks. We release open-source code for data processing, pre-training, and downstream tasks in the supplementary materials. This work highlights the transformative potential of self-supervised learning in digital health and multi-modal data.
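
The abstract names the key ingredients (cross-modality masking and a Transformer autoencoder) but not their implementation, so the following is a minimal PyTorch sketch of the general recipe only: patch-embed each modality, replace a random subset of patches in every stream with a learned mask token, let a shared Transformer encoder attend across the concatenated streams, and compute reconstruction loss on the masked patches alone. All class names, dimensions, and hyperparameters are illustrative assumptions, not the authors' released code.

```python
# A minimal, hypothetical sketch of cross-modality masking with a
# Transformer autoencoder, in the spirit of MoCA's description.
# Names, dimensions, and masking details are illustrative assumptions.
import torch
import torch.nn as nn

class CrossMaskedAutoencoder(nn.Module):
    def __init__(self, n_modalities=2, patch_len=16, d_model=64,
                 n_heads=4, n_layers=2, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # One linear patch embedding per modality (assumed design;
        # positional/modality embeddings omitted for brevity).
        self.embed = nn.ModuleList(
            [nn.Linear(patch_len, d_model) for _ in range(n_modalities)])
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, patch_len)

    def forward(self, streams):
        # streams: one (batch, n_patches, patch_len) tensor per modality,
        # patched on a shared time grid.
        tokens, targets, masked = [], [], []
        for m, x in enumerate(streams):
            z = self.embed[m](x)
            keep = torch.rand(z.shape[:2], device=z.device) > self.mask_ratio
            # Replace masked patches with a learned mask token; the model
            # must reconstruct them from this modality's visible context
            # and, via attention, from the other modalities' patches.
            z = torch.where(keep.unsqueeze(-1), z, self.mask_token)
            tokens.append(z)
            targets.append(x)
            masked.append(~keep)
        # Concatenating modalities along the sequence axis lets attention
        # run across modalities (the "cross" in cross-masking).
        h = self.encoder(torch.cat(tokens, dim=1))
        recon = self.head(h)
        target = torch.cat(targets, dim=1)
        mask = torch.cat(masked, dim=1)
        # MAE-style loss: reconstruction error on masked patches only.
        return ((recon - target) ** 2).mean(-1)[mask].mean()

# Toy usage: two synthetic streams, e.g. accelerometer and heart rate.
model = CrossMaskedAutoencoder()
loss = model([torch.randn(8, 32, 16), torch.randn(8, 32, 16)])
loss.backward()
```

Because the streams are concatenated before encoding, attention can route information from one modality's visible patches to another modality's masked positions, which is what lets cross-modal correlations aid reconstruction alongside within-modality temporal context.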
