ResearchTrend.AI
Robust Multimodal Learning for Ophthalmic Disease Grading via Disentangled Representation

7 March 2025
Xinkun Wang
Yifang Wang
Senwei Liang
Feilong Tang
Chengzhi Liu
Ming Hu
Chao Hu
Junjun He
Zongyuan Ge
Imran Razzak
Abstract

Ophthalmologists often rely on multimodal data to improve diagnostic accuracy, yet complete multimodal data are rare in real-world practice owing to limited medical equipment and data-privacy concerns. Traditional deep learning methods typically address incomplete modalities by learning representations in latent space, but the paper identifies two key limitations of these approaches: (i) task-irrelevant redundant information (e.g., numerous slices) in complex modalities introduces significant redundancy into latent-space representations, and (ii) overlapping multimodal representations make it difficult to extract features unique to each modality. To overcome these challenges, the authors propose the Essence-Point and Disentangle Representation Learning (EDRL) strategy, which integrates a self-distillation mechanism into an end-to-end framework to enhance feature selection and disentanglement for more robust multimodal learning. Specifically, the Essence-Point Representation Learning module selects discriminative features that improve disease-grading performance, while the Disentangled Representation Learning module separates multimodal data into modality-common and modality-unique representations, reducing feature entanglement and improving both robustness and interpretability in ophthalmic disease diagnosis. Experiments on multimodal ophthalmology datasets show that EDRL significantly outperforms current state-of-the-art methods.
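The abstract does not spell out how a disentangled split into modality-common and modality-unique representations might be implemented. The sketch below is NOT the authors' EDRL method; it is a minimal illustration, under assumed names (`encode`, `split_common_unique`, toy linear encoders), of the general idea: each modality's feature vector is partitioned into a common part, pushed to agree across modalities, and a unique part, penalized for overlapping with the common part.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy linear 'encoder' for one modality (stand-in for a deep network)."""
    return np.tanh(x @ w)

def split_common_unique(feat, d_common):
    """Partition each feature vector into modality-common and modality-unique parts."""
    return feat[:, :d_common], feat[:, d_common:]

def alignment_loss(c1, c2):
    """Common parts of the two modalities should agree (mean squared difference)."""
    return float(np.mean((c1 - c2) ** 2))

def orthogonality_loss(c, u):
    """Unique parts should carry information absent from the common parts:
    penalise their per-sample inner products."""
    return float(np.mean(np.sum(c * u, axis=1) ** 2))

# Two toy modalities (e.g., fundus features vs. OCT features), batch of 4.
x_fundus = rng.normal(size=(4, 16))
x_oct = rng.normal(size=(4, 16))
w1 = rng.normal(size=(16, 8)) * 0.1
w2 = rng.normal(size=(16, 8)) * 0.1

f1, f2 = encode(x_fundus, w1), encode(x_oct, w2)
c1, u1 = split_common_unique(f1, d_common=4)
c2, u2 = split_common_unique(f2, d_common=4)

# A training loop would minimise this alongside the disease-grading objective.
loss = alignment_loss(c1, c2) + orthogonality_loss(c1, u1) + orthogonality_loss(c2, u2)
print(loss)
```

In a real system the encoders would be deep networks trained end to end, and the paper additionally applies self-distillation and essence-point feature selection, neither of which is shown here.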

@article{wang2025_2503.05319,
  title={Robust Multimodal Learning for Ophthalmic Disease Grading via Disentangled Representation},
  author={Xinkun Wang and Yifang Wang and Senwei Liang and Feilong Tang and Chengzhi Liu and Ming Hu and Chao Hu and Junjun He and Zongyuan Ge and Imran Razzak},
  journal={arXiv preprint arXiv:2503.05319},
  year={2025}
}