GPT-PPG: A GPT-based Foundation Model for Photoplethysmography Signals

11 March 2025
Zhaoliang Chen, Cheng Ding, Saurabh Kataria, Runze Yan, Minxiao Wang, Randall J. Lee, Xiao Hu
Abstract

This study introduces a Generative Pre-trained Transformer (GPT) model tailored to photoplethysmography (PPG) signals, serving as a foundation model for various downstream tasks. By adapting the standard GPT architecture to the continuous nature of PPG signals, our approach demonstrates promising results. Our models are pre-trained on an extensive dataset containing more than 200 million 30-second PPG samples. We explored several supervised fine-tuning techniques to adapt the model to downstream tasks, achieving performance comparable to or surpassing current state-of-the-art (SOTA) methods on tasks such as atrial fibrillation detection. A standout feature of our GPT model is its inherent ability to perform generative tasks such as signal denoising effectively, without further fine-tuning. We attribute this to the generative nature of the GPT framework.
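
The abstract says the standard GPT architecture is adapted to the continuous nature of PPG signals but does not spell out how. The sketch below is only one plausible reading, not the authors' implementation: a decoder-only transformer whose token embedding is replaced by a linear projection of fixed-length signal patches and whose output head regresses the next patch under an MSE objective. The patch length, model width, depth, and the class name GPTForPPG are illustrative assumptions.

# Minimal sketch (assumed design, not the paper's code): GPT-style causal
# transformer for continuous PPG, with a linear patch embedding in place of
# a token embedding and a regression head trained on next-patch prediction.
import torch
import torch.nn as nn

class GPTForPPG(nn.Module):
    def __init__(self, patch_len=40, d_model=256, n_heads=4, n_layers=6, max_patches=512):
        super().__init__()
        self.patch_len = patch_len
        self.input_proj = nn.Linear(patch_len, d_model)    # replaces the token embedding
        self.pos_emb = nn.Embedding(max_patches, d_model)  # learned positional embedding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, patch_len)          # regression head: predict the next patch

    def forward(self, x):
        # x: (batch, n_patches, patch_len) -- e.g. a 30 s PPG window split into patches
        b, t, _ = x.shape
        h = self.input_proj(x) + self.pos_emb(torch.arange(t, device=x.device))
        # causal mask so each patch attends only to earlier patches
        causal = torch.triu(torch.full((t, t), float("-inf"), device=x.device), diagonal=1)
        h = self.blocks(h, mask=causal)
        return self.head(h)                                # (batch, n_patches, patch_len)

# Pre-training objective (assumed): predict each next patch from all previous ones via MSE.
model = GPTForPPG()
signal = torch.randn(2, 128, 40)          # two synthetic PPG windows, 128 patches of 40 samples
pred = model(signal)
loss = nn.functional.mse_loss(pred[:, :-1], signal[:, 1:])
loss.backward()

Under this reading, the same next-patch generative objective is what would let the pre-trained model perform denoising-style generation without task-specific fine-tuning, while downstream classifiers (e.g. atrial fibrillation detection) would attach a separate head during supervised fine-tuning.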

View on arXiv: https://arxiv.org/abs/2503.08015
@article{chen2025_2503.08015,
  title={GPT-PPG: A GPT-based Foundation Model for Photoplethysmography Signals},
  author={Zhaoliang Chen and Cheng Ding and Saurabh Kataria and Runze Yan and Minxiao Wang and Randall Lee and Xiao Hu},
  journal={arXiv preprint arXiv:2503.08015},
  year={2025}
}