Enhancing Multimodal Medical Image Classification using Cross-Graph Modal Contrastive Learning

The classification of medical images is a pivotal aspect of disease diagnosis, often enhanced by deep learning techniques. However, traditional approaches typically focus on unimodal medical image data, neglecting the integration of diverse non-image patient data. This paper proposes a novel Cross-Graph Modal Contrastive Learning (CGMCL) framework that integrates multimodal structured data from different domains to improve medical image classification. The model combines image and non-image data by constructing cross-modality graphs and leveraging contrastive learning to align multimodal features in a shared latent space. An inter-modality feature scaling module further optimizes representation learning by narrowing the gap between heterogeneous modalities. The proposed approach is evaluated on two datasets: a Parkinson's disease (PD) dataset and a public melanoma dataset. Results demonstrate that CGMCL outperforms conventional unimodal methods in accuracy, interpretability, and early disease prediction, and achieves superior performance in multi-class melanoma classification. The CGMCL framework thus provides valuable insights for medical image classification while offering improved disease interpretability and predictive capabilities.
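To make the cross-modal alignment idea concrete, below is a minimal sketch of a symmetric InfoNCE-style contrastive loss between image and non-image (e.g., tabular clinical) embeddings projected into a shared latent space. This is an illustrative assumption, not the paper's actual CGMCL objective: the function name, the temperature parameter, and the use of InfoNCE are hypothetical, and the paper's graph construction and feature scaling module are not reproduced here.

```python
import torch
import torch.nn.functional as F

def cross_modal_info_nce(img_emb: torch.Tensor,
                         tab_emb: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """Symmetric contrastive loss pulling matched image / non-image
    embeddings together in a shared latent space.

    img_emb, tab_emb: (batch, dim) projections of the two modalities;
    row i of each tensor is assumed to describe the same patient.
    """
    # L2-normalize so dot products become cosine similarities.
    img = F.normalize(img_emb, dim=-1)
    tab = F.normalize(tab_emb, dim=-1)

    # (batch, batch) similarity matrix; diagonal entries are the
    # matched (positive) image/non-image pairs.
    logits = img @ tab.t() / temperature
    targets = torch.arange(img.size(0), device=img.device)

    # Contrast in both directions: image -> tabular and tabular -> image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)
```

In a multimodal pipeline of the kind the abstract describes, such a loss would typically be added to the classification objective so that the two modality encoders learn a common representation before fusion.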
@article{ding2025_2410.17494,
  title   = {Enhancing Multimodal Medical Image Classification using Cross-Graph Modal Contrastive Learning},
  author  = {Jun-En Ding and Chien-Chin Hsu and Chi-Hsiang Chu and Shuqiang Wang and Feng Liu},
  journal = {arXiv preprint arXiv:2410.17494},
  year    = {2025}
}