

Gender Fairness of Machine Learning Algorithms for Pain Detection

10 June 2025
Dylan Green
Yuting Shang
Jiaee Cheong
Yang Liu
Hatice Gunes
Main: 6 pages · 2 figures · 5 tables · Bibliography: 3 pages
Abstract

Automated pain detection through machine learning (ML) and deep learning (DL) algorithms holds significant potential in healthcare, particularly for patients unable to self-report pain levels. However, the accuracy and fairness of these algorithms across different demographic groups (e.g., gender) remain under-researched. This paper investigates the gender fairness of ML and DL models trained on the UNBC-McMaster Shoulder Pain Expression Archive Database, evaluating the performance of various models in detecting pain based solely on the visual modality of participants' facial expressions. We compare traditional ML algorithms, the Linear Support Vector Machine (LSVM) and Radial Basis Function SVM (RBF SVM), with DL methods, the Convolutional Neural Network (CNN) and Vision Transformer (ViT), using a range of performance and fairness metrics. While the ViT achieved the highest accuracy and the best scores on a selection of fairness metrics, all models exhibited gender-based biases. These findings highlight the persistent trade-off between accuracy and fairness, emphasising the need for fairness-aware techniques to mitigate biases in automated healthcare systems.
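The kind of group-fairness evaluation the abstract describes can be illustrated with a minimal sketch. The function below is a hypothetical helper (not from the paper): given binary pain labels, model predictions, and a binary gender attribute, it computes two common gaps, the per-group accuracy difference and an equal-opportunity-style true-positive-rate difference.

```python
import numpy as np

def group_fairness_gaps(y_true, y_pred, group):
    """Accuracy and TPR gaps between two demographic groups.

    y_true, y_pred : binary arrays (1 = pain, 0 = no pain)
    group          : binary array (e.g., 0 = male, 1 = female)
    """
    accs, tprs = [], []
    for g in (0, 1):
        mask = group == g
        # Per-group accuracy
        accs.append(np.mean(y_true[mask] == y_pred[mask]))
        # Per-group true-positive rate: fraction of actual-pain samples detected
        pos = mask & (y_true == 1)
        tprs.append(np.mean(y_pred[pos] == 1) if pos.any() else np.nan)
    return {
        "accuracy_gap": abs(accs[0] - accs[1]),  # disparity in overall accuracy
        "tpr_gap": abs(tprs[0] - tprs[1]),       # equal-opportunity difference
    }

# Toy example with made-up labels and predictions
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(group_fairness_gaps(y_true, y_pred, group))
```

A model can have a zero accuracy gap yet a large TPR gap (as in the toy example above), which is why papers in this area report several fairness metrics rather than a single one.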

@article{green2025_2506.11132,
  title={Gender Fairness of Machine Learning Algorithms for Pain Detection},
  author={Dylan Green and Yuting Shang and Jiaee Cheong and Yang Liu and Hatice Gunes},
  journal={arXiv preprint arXiv:2506.11132},
  year={2025}
}