
CheXLearner: Text-Guided Fine-Grained Representation Learning for Progression Detection

Abstract

Temporal medical image analysis is essential for clinical decision-making, yet existing methods either align images and text at a coarse level - causing potential semantic mismatches - or depend solely on visual information, lacking medical semantic integration. We present CheXLearner, the first end-to-end framework that unifies anatomical region detection, Riemannian manifold-based structure alignment, and fine-grained regional semantic guidance. Our proposed Med-Manifold Alignment Module (Med-MAM) leverages hyperbolic geometry to robustly align anatomical structures and capture pathologically meaningful discrepancies across temporal chest X-rays. By introducing regional progression descriptions as supervision, CheXLearner achieves enhanced cross-modal representation learning and supports dynamic low-level feature optimization. Experiments show that CheXLearner achieves 81.12% (+17.2%) average accuracy and 80.32% (+11.05%) F1-score on anatomical region progression detection - substantially outperforming state-of-the-art baselines, especially in structurally complex regions. Additionally, our model attains a 91.52% average AUC score in downstream disease classification, validating its superior feature representation.
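To make the hyperbolic-alignment idea behind Med-MAM concrete, here is a minimal sketch of a Poincaré-ball alignment loss between matched anatomical-region features from a prior and a current chest X-ray. This is not the authors' implementation: the function names, feature shapes, and the choice of the Poincaré ball model are all assumptions for illustration only.

```python
# Minimal sketch (not the paper's Med-MAM code): aligning temporal
# anatomical-region embeddings with a Poincare-ball geodesic distance.
# All names, shapes, and the unit-ball model are illustrative assumptions.
import torch

def project_to_ball(x, eps=1e-5):
    """Rescale embeddings so they lie strictly inside the unit Poincare ball."""
    norm = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    max_norm = 1.0 - eps
    scale = torch.where(norm > max_norm, max_norm / norm, torch.ones_like(norm))
    return x * scale

def poincare_distance(x, y, eps=1e-5):
    """Geodesic distance on the Poincare ball:
    d(x, y) = arcosh(1 + 2*||x - y||^2 / ((1 - ||x||^2)(1 - ||y||^2)))."""
    sq_diff = (x - y).pow(2).sum(dim=-1)
    denom = (1 - x.pow(2).sum(dim=-1)) * (1 - y.pow(2).sum(dim=-1))
    arg = 1 + 2 * sq_diff / denom.clamp_min(eps)
    return torch.acosh(arg.clamp_min(1.0 + eps))

def region_alignment_loss(prior_feats, current_feats):
    """Mean hyperbolic distance between matched region embeddings from the
    prior and current study; prior_feats, current_feats: (regions, dim)."""
    p = project_to_ball(prior_feats)
    c = project_to_ball(current_feats)
    return poincare_distance(p, c).mean()

# Toy usage: 12 anatomical regions with 64-dim features per study.
prior = torch.randn(12, 64) * 0.1
current = torch.randn(12, 64) * 0.1
print(region_alignment_loss(prior, current))
```

In such a scheme, minimizing the geodesic distance would pull corresponding regions together on the manifold, while residual distances flag pathologically meaningful temporal discrepancies; the paper's actual alignment objective and manifold parameterization may differ.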

@article{wang2025_2505.06903,
  title={CheXLearner: Text-Guided Fine-Grained Representation Learning for Progression Detection},
  author={Yuanzhuo Wang and Junwen Duan and Xinyu Li and Jianxin Wang},
  journal={arXiv preprint arXiv:2505.06903},
  year={2025}
}