Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation

12 March 2024 · arXiv:2403.08002
Juan Manuel Zambrano Chaves, Shih-Cheng Huang, Yanbo Xu, Hanwen Xu, Naoto Usuyama, Sheng Zhang, Fei Wang, Yujia Xie, Mahmoud Khademi, Ziyi Yang, Hany Awadalla, Julia Gong, Houdong Hu, Jianwei Yang, Chunyuan Li, Jianfeng Gao, Yu Gu, Cliff Wong, Mu-Hsin Wei, Tristan Naumann, Muhao Chen, M. Lungren, Akshay S. Chaudhari, Serena Yeung-Levy, Curtis P. Langlotz, Sheng Wang, Hoifung Poon
Topics: VLM, LM&MA
Abstract

The scaling laws and extraordinary performance of large foundation models motivate the development and utilization of such models in biomedicine. However, despite early promising results on some biomedical benchmarks, major challenges remain before these models can be used in real-world clinics. Frontier general-domain models such as GPT-4V still have significant performance gaps in multimodal biomedical applications. More importantly, less-acknowledged pragmatic issues, including accessibility, model cost, and tedious manual evaluation, make it hard for clinicians to use state-of-the-art large models directly on private patient data. Here, we explore training open-source small multimodal models (SMMs) to bridge competency gaps for unmet clinical needs in radiology. To maximize data efficiency, we adopt a modular approach, incorporating state-of-the-art pre-trained models for the image and text modalities and focusing on training a lightweight adapter that grounds each modality to the text embedding space, as exemplified by LLaVA-Med. For training, we assemble a large dataset of over 697 thousand radiology image-text pairs. For evaluation, we propose CheXprompt, a GPT-4-based metric for factuality evaluation, and demonstrate its parity with expert evaluation. For best practice, we conduct a systematic ablation study on various choices in data engineering and multimodal training. The resulting LLaVA-Rad (7B) model attains state-of-the-art results on standard radiology tasks such as report generation and cross-modal retrieval, even outperforming much larger models such as GPT-4V and Med-PaLM M (84B). LLaVA-Rad inference is fast and can be performed on a single V100 GPU in private settings, offering a promising state-of-the-art tool for real-world clinical applications.
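To make the modular design concrete, below is a minimal sketch of a LLaVA-style lightweight adapter: a small trainable MLP that projects frozen image-encoder features into the language model's text embedding space. The class name and dimensions are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultimodalProjector(nn.Module):
    """Lightweight adapter that maps frozen image-encoder features into
    the text embedding space of a frozen language model, in the spirit
    of LLaVA-style grounding. Dimensions are illustrative assumptions,
    not the paper's reported configuration."""

    def __init__(self, vision_dim: int = 1024, text_dim: int = 4096):
        super().__init__()
        # The small MLP is the only trainable component; the image
        # encoder and LLM stay frozen, which is what makes the
        # approach data-efficient.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim)
        # Returns visual tokens ready to be prepended to text embeddings.
        return self.proj(image_features)

# Usage: project patch features from a frozen radiology image encoder,
# then concatenate the result with the LLM's text token embeddings.
projector = MultimodalProjector()
patch_feats = torch.randn(2, 196, 1024)  # dummy encoder output
visual_tokens = projector(patch_feats)   # shape: (2, 196, 4096)
```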
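Similarly, here is a hedged sketch of what a CheXprompt-style factuality check might look like, using the OpenAI chat API to count factual errors in a generated report against a reference. The prompt wording, function name, and scoring format are illustrative assumptions; the paper's actual CheXprompt rubric may differ.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative prompt only: the paper's actual CheXprompt instructions
# and error taxonomy may differ.
PROMPT_TEMPLATE = """You are evaluating a chest X-ray report for factual errors.

Reference report:
{reference}

Candidate report:
{candidate}

Count the candidate's factual errors with respect to the reference
(false findings, omitted findings, wrong location, or wrong severity).
Answer with a single integer."""

def count_factual_errors(reference: str, candidate: str,
                         model: str = "gpt-4") -> int:
    """Ask GPT-4 to count factual errors in a generated report,
    in the spirit of the paper's CheXprompt metric (sketch only)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(
                reference=reference, candidate=candidate),
        }],
        temperature=0,
    )
    # Assumes the model replies with a bare integer, as instructed.
    return int(response.choices[0].message.content.strip())
```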
