RADAR: Enhancing Radiology Report Generation with Supplementary Knowledge Injection

20 May 2025
Wenjun Hou, Yi Cheng, Kaishuai Xu, Heng Li, Yan Hu, Wenjie Li, Jiang Liu
arXiv:2505.14318 (abs / PDF / HTML)
Main: 9 pages · Bibliography: 4 pages · Appendix: 3 pages · 6 figures · 9 tables
Abstract

Large language models (LLMs) have demonstrated remarkable capabilities in various domains, including radiology report generation. Previous approaches have attempted to utilize multimodal LLMs for this task, enhancing their performance through the integration of domain-specific knowledge retrieval. However, these approaches often overlook the knowledge already embedded within the LLMs, leading to redundant information integration and inefficient utilization of learned representations. To address this limitation, we propose RADAR, a framework for enhancing radiology report generation with supplementary knowledge injection. RADAR improves report generation by systematically leveraging both the internal knowledge of an LLM and externally retrieved information. Specifically, it first extracts the model's acquired knowledge that aligns with expert image-based classification outputs. It then retrieves relevant supplementary knowledge to further enrich this information. Finally, by aggregating both sources, RADAR generates more accurate and informative radiology reports. Extensive experiments on MIMIC-CXR, CheXpert-Plus, and IU X-ray demonstrate that our model outperforms state-of-the-art LLMs in both language quality and clinical accuracy.
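The abstract describes a three-stage pipeline: align the LLM's internal knowledge with an expert image classifier's findings, retrieve external knowledge only for what the internal knowledge misses, and aggregate both sources for generation. The sketch below illustrates that control flow under stated assumptions; all function names (extract_internal_knowledge, retrieve_supplementary, generate_report), prompt strings, and data shapes are hypothetical and do not reflect the paper's actual implementation.

# Illustrative sketch of the RADAR pipeline as summarized in the abstract.
# All names, prompts, and interfaces are assumptions for exposition only.

from dataclasses import dataclass

@dataclass
class Finding:
    label: str      # e.g. "Cardiomegaly"
    present: bool   # output of an expert image-based classifier (assumed)

def extract_internal_knowledge(llm_generate, image, findings):
    """Stage 1: query the multimodal LLM about the image and keep only the
    statements that agree with the expert classifier's positive findings."""
    candidate = llm_generate(image, prompt="Describe the salient findings.")
    return [f.label for f in findings
            if f.present and f.label.lower() in candidate.lower()]

def retrieve_supplementary(findings, internal, knowledge_base):
    """Stage 2: retrieve external knowledge only for findings the internal
    knowledge did not already cover, avoiding redundant injection."""
    missing = [f.label for f in findings if f.present and f.label not in internal]
    return [knowledge_base.get(label, "") for label in missing]

def generate_report(llm_generate, image, internal, supplementary):
    """Stage 3: aggregate both knowledge sources into the report prompt."""
    prompt = (
        "Known findings: " + "; ".join(internal) + "\n"
        "Supplementary knowledge: " + "; ".join(supplementary) + "\n"
        "Write the radiology report."
    )
    return llm_generate(image, prompt=prompt)

def radar(llm_generate, knowledge_base, image, findings):
    internal = extract_internal_knowledge(llm_generate, image, findings)
    supplementary = retrieve_supplementary(findings, internal, knowledge_base)
    return generate_report(llm_generate, image, internal, supplementary)

In this reading, the key design choice is that retrieval is conditioned on what the LLM already expresses, which is how the framework avoids the redundant knowledge integration the abstract criticizes in prior work.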

@article{hou2025_2505.14318,
  title={RADAR: Enhancing Radiology Report Generation with Supplementary Knowledge Injection},
  author={Wenjun Hou and Yi Cheng and Kaishuai Xu and Heng Li and Yan Hu and Wenjie Li and Jiang Liu},
  journal={arXiv preprint arXiv:2505.14318},
  year={2025}
}