Machine-learning for photoplethysmography analysis: Benchmarking feature, image, and signal-based approaches

Photoplethysmography (PPG) is a widely used non-invasive physiological sensing technique suitable for various clinical applications. Such applications are increasingly supported by machine learning methods, raising the question of the most appropriate input representation and model choice. Comprehensive comparisons, in particular across different input representations, are scarce. We address this gap with a comprehensive benchmarking study covering three kinds of input representations — interpretable features, image representations, and raw waveforms — across prototypical regression and classification use cases: blood pressure and atrial fibrillation prediction. In both cases, the best results are achieved by deep neural networks operating on raw time series. Within this model class, the best results are achieved by modern convolutional neural networks (CNNs), although, depending on the task setup, shallow CNNs are often also very competitive. We envision that these results will guide researchers in their choice of machine learning methods for PPG data, even beyond the use cases presented in this work.
@article{moulaeifard2025_2502.19949,
  title={Machine-learning for photoplethysmography analysis: Benchmarking feature, image, and signal-based approaches},
  author={Mohammad Moulaeifard and Loic Coquelin and Mantas Rinkevičius and Andrius Sološenko and Oskar Pfeffer and Ciaran Bench and Nando Hegemann and Sara Vardanega and Manasi Nandi and Jordi Alastruey and Christian Heiss and Vaidotas Marozas and Andrew Thompson and Philip J. Aston and Peter H. Charlton and Nils Strodthoff},
  journal={arXiv preprint arXiv:2502.19949},
  year={2025}
}