

Exploring the Boundaries of GPT-4 in Radiology

23 October 2023
Qianchu Liu
Stephanie L. Hyland
Shruthi Bannur
Kenza Bouzid
Daniel Coelho de Castro
Maria T. A. Wetscherek
Robert Tinn
Harshita Sharma
Fernando Pérez-García
Anton Schwaighofer
Pranav Rajpurkar
Sameer Tajdin Khanna
Hoifung Poon
Naoto Usuyama
Anja Thieme
Aditya V. Nori
Matthew P. Lungren
Ozan Oktay
Javier Alvarez-Valle
Communities: LM&MA, AI4CE
arXiv: 2310.14573
Abstract

The recent success of general-domain large language models (LLMs) has significantly shifted the natural language processing paradigm towards a unified foundation model across domains and applications. In this paper, we focus on assessing the performance of GPT-4, the most capable LLM to date, on text-based applications for radiology reports, comparing it against state-of-the-art (SOTA) radiology-specific models. Exploring various prompting strategies, we evaluated GPT-4 on a diverse range of common radiology tasks and found that it either outperforms or is on par with current SOTA radiology models. With zero-shot prompting, GPT-4 already obtains substantial gains (≈10% absolute improvement) over radiology models in temporal sentence similarity classification (accuracy) and natural language inference (F1). For tasks that require learning dataset-specific style or schema (e.g. findings summarisation), GPT-4 improves with example-based prompting and matches the supervised SOTA. Our extensive error analysis with a board-certified radiologist shows that GPT-4 has a sufficient level of radiology knowledge, with only occasional errors in complex contexts that require nuanced domain knowledge. For findings summarisation, GPT-4 outputs are found to be overall comparable with existing manually written impressions.
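
As a concrete illustration of the two prompting regimes the abstract contrasts, the sketch below shows zero-shot versus example-based (few-shot) prompting for findings summarisation. It is a minimal sketch, not the paper's evaluation harness: the prompt wording, the in-context example report, and the use of the OpenAI Python SDK with the "gpt-4" model name are illustrative assumptions.

```python
# Minimal sketch: zero-shot vs. example-based (few-shot) prompting for
# radiology findings summarisation. Prompts and examples are hypothetical,
# not taken from the paper; assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

INSTRUCTION = "Summarise the following radiology findings into an impression."

# Hypothetical in-context example pair. The paper's example-based prompting
# draws real findings/impression pairs from the target dataset so the model
# can pick up its dataset-specific style and schema.
FEW_SHOT_EXAMPLES = [
    ("Findings: Heart size is normal. Lungs are clear. No pleural effusion.",
     "Impression: No acute cardiopulmonary abnormality."),
]

def summarise(findings: str, few_shot: bool = False) -> str:
    """Return a model-written impression for a findings section."""
    messages = [{"role": "system", "content": INSTRUCTION}]
    if few_shot:
        # Each example is presented as a prior user/assistant exchange.
        for example_findings, example_impression in FEW_SHOT_EXAMPLES:
            messages.append({"role": "user", "content": example_findings})
            messages.append({"role": "assistant", "content": example_impression})
    messages.append({"role": "user", "content": findings})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

print(summarise("Findings: Mild cardiomegaly. No focal consolidation.",
                few_shot=True))
```

The few-shot branch corresponds to what the abstract calls example-based prompting: for tasks like findings summarisation, where target impressions follow a dataset-specific style, supplying a handful of such exchanges is what the paper reports closes the gap to supervised SOTA models.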
