ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Reinforced Correlation Between Vision and Language for Precise Medical AI Assistant

6 May 2025
Haonan Wang
Jiaji Mao
Lehan Wang
Qixiang Zhang
Marawan Elbatel
Yi Qin
Huijun Hu
Baoxun Li
Wenhui Deng
Weifeng Qin
Hongrui Li
Jialin Liang
Jun Shen
Xiaomeng Li
Abstract

Medical AI assistants support doctors in disease diagnosis, medical image analysis, and report generation. However, they still face significant challenges in clinical use, including limited accuracy with multimodal content and insufficient validation in real-world settings. We propose RCMed, a full-stack AI assistant that improves multimodal alignment in both input and output, enabling precise anatomical delineation, accurate localization, and reliable diagnosis through hierarchical vision-language grounding. A self-reinforcing correlation mechanism allows visual features to inform language context, while language semantics guide pixel-wise attention, forming a closed loop that refines both modalities. This correlation is enhanced by a color region description strategy, which translates anatomical structures into semantically rich text to learn shape-location-text relationships across scales. Trained on 20 million image-mask-description triplets, RCMed achieves state-of-the-art precision in contextualizing irregular lesions and subtle anatomical boundaries, excelling in 165 clinical tasks across 9 modalities. It achieves a 23.5% relative improvement over prior methods in cell segmentation from microscopy images. RCMed's strong vision-language alignment enables exceptional generalization, with state-of-the-art performance in external validation across 20 clinically significant cancer types, including novel tasks. This work demonstrates how integrated multimodal models capture fine-grained patterns, enabling human-level interpretation in complex scenarios and advancing human-centric AI healthcare.
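The "self-reinforcing correlation mechanism" described above — visual features informing language context while language semantics guide visual attention, in a closed loop — can be illustrated with a minimal sketch. This is not the paper's implementation; it is a toy iterative bidirectional cross-attention loop, with all function names, dimensions, and the mixing weight `alpha` being assumptions chosen for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, context):
    # Standard scaled dot-product cross-attention:
    # each query row gathers a weighted mix of context rows.
    d = queries.shape[-1]
    attn = softmax(queries @ context.T / np.sqrt(d), axis=-1)
    return attn @ context

def reinforce_correlation(vision, language, steps=3, alpha=0.5):
    # Hypothetical closed loop: vision features are refined with
    # language context, then language features with the updated
    # vision features, repeated for a few steps.
    for _ in range(steps):
        vision = (1 - alpha) * vision + alpha * cross_attend(vision, language)
        language = (1 - alpha) * language + alpha * cross_attend(language, vision)
    return vision, language

rng = np.random.default_rng(0)
v = rng.standard_normal((16, 32))  # 16 visual patch embeddings, dim 32
t = rng.standard_normal((8, 32))   # 8 text token embeddings, dim 32
v_out, t_out = reinforce_correlation(v, t)
print(v_out.shape, t_out.shape)    # shapes are preserved: (16, 32) (8, 32)
```

In the actual model these updates would be learned attention layers operating at multiple scales (hierarchical grounding), with the language branch additionally conditioned on the color region descriptions; the sketch only shows the alternating refinement structure.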

View on arXiv
@article{wang2025_2505.03380,
  title={Reinforced Correlation Between Vision and Language for Precise Medical AI Assistant},
  author={Haonan Wang and Jiaji Mao and Lehan Wang and Qixiang Zhang and Marawan Elbatel and Yi Qin and Huijun Hu and Baoxun Li and Wenhui Deng and Weifeng Qin and Hongrui Li and Jialin Liang and Jun Shen and Xiaomeng Li},
  journal={arXiv preprint arXiv:2505.03380},
  year={2025}
}