RaDialog: A Large Vision-Language Model for Radiology Report Generation and Conversational Assistance

30 November 2023
Chantal Pellegrini
Ege Özsoy
Benjamin Busam
Nassir Navab
Matthias Keicher
Communities: MedIm, LM&MA
Abstract

Conversational AI tools that can generate and discuss clinically correct radiology reports for a given medical image have the potential to transform radiology. Such a human-in-the-loop radiology assistant could facilitate a collaborative diagnostic process, thus saving time and improving the quality of reports. Towards this goal, we introduce RaDialog, the first thoroughly evaluated and publicly available large vision-language model for radiology report generation and interactive dialog. RaDialog effectively integrates visual image features and structured pathology findings with a large language model (LLM) while simultaneously adapting it to a specialized domain using parameter-efficient fine-tuning. To keep the conversational abilities of the underlying LLM, we propose a comprehensive, semi-automatically labeled, image-grounded instruct dataset for chest X-ray radiology tasks. By training with this dataset, our method achieves state-of-the-art clinical correctness in report generation and shows impressive abilities in interactive tasks such as correcting reports and answering questions, serving as a foundational step toward clinical dialog systems. Our code is available on GitHub: this https URL.
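The abstract describes two mechanisms worth making concrete: structured pathology findings are combined with visual features in the input to the LLM, and the LLM is adapted to the radiology domain with parameter-efficient fine-tuning. The sketch below illustrates this general pattern using LoRA adapters from the Hugging Face peft library. It is a minimal illustration, not the authors' implementation: the base model (gpt2 as a small stand-in), the prompt template, the findings list, and all hyperparameters are assumptions, and the visual-feature pathway is omitted for brevity.

```python
# Minimal sketch of the integration pattern described in the abstract:
# structured pathology findings are serialized into the LLM prompt, and the
# LLM is adapted with parameter-efficient fine-tuning (LoRA). All names and
# values below are illustrative placeholders, not RaDialog's actual config.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Small placeholder base LLM; the paper adapts a much larger conversational LLM.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Parameter-efficient fine-tuning: freeze the base model and train only
# small low-rank adapter matrices injected into the attention projections.
lora_config = LoraConfig(
    r=16,                       # adapter rank (illustrative)
    lora_alpha=32,
    target_modules=["c_attn"],  # GPT-2's attention projection; differs per model
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Hypothetical structured findings, e.g. from a CheXpert-style classifier.
findings = ["cardiomegaly", "pleural effusion"]

# Serialize the findings into the prompt. In the full system, visual features
# from an image encoder are also injected alongside this text.
prompt = (
    "Image findings: " + ", ".join(findings) + ". "
    "Write a radiology report for this chest X-ray:"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

During fine-tuning, only the LoRA adapter weights receive gradients, which is what lets the underlying LLM keep its general conversational abilities while specializing to radiology tasks, as the abstract describes.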

@article{pellegrini2025_2311.18681,
  title={RaDialog: A Large Vision-Language Model for Radiology Report Generation and Conversational Assistance},
  author={Chantal Pellegrini and Ege Özsoy and Benjamin Busam and Nassir Navab and Matthias Keicher},
  journal={arXiv preprint arXiv:2311.18681},
  year={2025}
}