Multimodal data matters: language model pre-training over structured and
unstructured electronic health records
The massive volume of electronic health records (EHRs) has created enormous potential for improving healthcare, and within EHRs, clinical codes (structured data) and clinical narratives (unstructured data) are two important textual modalities. Most existing EHR-oriented studies, however, either focus on a single modality or integrate data from different modalities in a shallow manner, ignoring the intrinsic interactions between them. To address these issues, we propose a Medical Multimodal Pre-trained Language Model, named MedM-PLM, to learn enhanced EHR representations over structured and unstructured data. In MedM-PLM, two Transformer-based neural network components are first adopted to learn representative characteristics from each modality. A cross-modal module is then introduced to model their interactions. We pre-trained MedM-PLM on the MIMIC-III dataset and verified its effectiveness on three downstream clinical tasks: medication recommendation, 30-day readmission prediction, and ICD coding. Extensive experiments demonstrate the advantages of MedM-PLM over state-of-the-art methods. Further analyses and visualizations show the robustness of our model, which could potentially provide more comprehensive interpretations for clinical decision-making.
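The architecture outlined above (one Transformer-based encoder per modality plus a cross-modal interaction module) can be illustrated with a minimal PyTorch sketch. The class name, dimensions, vocabularies, and the cross-attention fusion strategy below are illustrative assumptions, not a reproduction of the authors' implementation.

```python
import torch
import torch.nn as nn

class DualModalEncoder(nn.Module):
    """Hypothetical sketch of a two-stream EHR encoder with cross-modal fusion,
    loosely following the description in the abstract. All sizes and the
    fusion mechanism are assumptions for illustration."""

    def __init__(self, code_vocab=5000, text_vocab=30000, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # Modality-specific embeddings: clinical codes (structured) and note tokens (unstructured).
        self.code_emb = nn.Embedding(code_vocab, d_model)
        self.text_emb = nn.Embedding(text_vocab, d_model)
        # One Transformer encoder per modality.
        self.code_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_layers)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_layers)
        # Cross-modal interaction: codes attend to note tokens and vice versa.
        self.code2text = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.text2code = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, code_ids, text_ids):
        codes = self.code_encoder(self.code_emb(code_ids))    # (B, Lc, d)
        notes = self.text_encoder(self.text_emb(text_ids))    # (B, Lt, d)
        fused_codes, _ = self.code2text(codes, notes, notes)  # code stream enriched by notes
        fused_notes, _ = self.text2code(notes, codes, codes)  # note stream enriched by codes
        return fused_codes, fused_notes


# Toy usage with random ids (batch of 2, 16 codes, 32 note tokens).
model = DualModalEncoder()
out_c, out_t = model(torch.randint(0, 5000, (2, 16)), torch.randint(0, 30000, (2, 32)))
print(out_c.shape, out_t.shape)  # torch.Size([2, 16, 256]) torch.Size([2, 32, 256])
```

The fused, modality-specific representations would then feed task heads such as medication recommendation, readmission prediction, or ICD coding; the pre-training objectives themselves are not shown here.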