PETAR: Localized Findings Generation with Mask-Aware Vision-Language Modeling for PET Automated Reporting

31 October 2025
Danyal Maqbool
Changhee Lee
Zachary Huemann
Samuel Church
Matthew E. Larson
Scott B. Perlman
Tomas A. Romero
Joshua Warner
Meghan G. Lubner
Xin Tie
J. Merkow
Junjie Hu
Steve Y. Cho
Tyler Bradshaw
Topic: VLM
arXiv: 2510.27680 (abs · PDF · HTML)
Main: 7 pages · 4 figures · 5 tables · Bibliography: 2 pages
Abstract

Recent advances in vision-language models (VLMs) have enabled impressive multimodal reasoning, yet most medical applications remain limited to 2D imaging. In this work, we extend VLMs to 3D positron emission tomography and computed tomography (PET/CT), a domain characterized by large volumetric data, small and dispersed lesions, and lengthy radiology reports. We introduce a large-scale dataset comprising over 11,000 lesion-level descriptions paired with 3D segmentations from more than 5,000 PET/CT exams, extracted via a hybrid rule-based and large language model (LLM) pipeline. Building upon this dataset, we propose PETAR-4B, a 3D mask-aware vision-language model that integrates PET, CT, and lesion contours for spatially grounded report generation. PETAR bridges global contextual reasoning with fine-grained lesion awareness, producing clinically coherent and localized findings. Comprehensive automated and human evaluations demonstrate that PETAR substantially improves PET/CT report generation quality, advancing 3D medical vision-language understanding.
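The abstract does not spell out PETAR's architecture, so as a rough illustration only, the sketch below shows one plausible way a "mask-aware" model could fuse PET, CT, and a lesion mask into tokens for a language decoder: a shared 3D convolutional stem over the stacked PET/CT volume, plus mask-weighted pooling so the decoder receives a representation grounded in the annotated lesion voxels. All module names, tensor shapes, and the two-token design here are assumptions for illustration, not the authors' published implementation.

```python
# Hypothetical sketch -- NOT PETAR's published architecture.
# Illustrates mask-aware fusion of PET, CT, and a lesion mask.
import torch
import torch.nn as nn

class MaskAwareFusion3D(nn.Module):
    """Toy fusion module: encodes a 2-channel (PET+CT) volume with a
    3D conv stem, then uses the lesion mask to pool a lesion-local
    token alongside a global token (all sizes are illustrative)."""
    def __init__(self, embed_dim=256):
        super().__init__()
        # Shared 3D stem over stacked PET and CT channels.
        self.stem = nn.Sequential(
            nn.Conv3d(2, 32, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv3d(32, embed_dim, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, pet, ct, mask):
        # pet, ct, mask: (B, 1, D, H, W); mask is a binary lesion volume.
        feats = self.stem(torch.cat([pet, ct], dim=1))      # (B, C, d, h, w)
        # Downsample the mask to the feature grid.
        m = nn.functional.interpolate(mask, size=feats.shape[2:], mode="nearest")
        # Global token: mean over the whole feature volume.
        global_tok = feats.flatten(2).mean(dim=2)           # (B, C)
        # Lesion token: mask-weighted mean over lesion voxels only.
        denom = m.flatten(2).sum(dim=2).clamp(min=1.0)      # (B, 1)
        lesion_tok = (feats * m).flatten(2).sum(dim=2) / denom
        # Two tokens per exam; a real system would emit many more
        # (e.g. one token set per segmented lesion).
        return torch.stack([global_tok, lesion_tok], dim=1)  # (B, 2, C)

if __name__ == "__main__":
    B, D, H, W = 1, 32, 64, 64
    pet = torch.randn(B, 1, D, H, W)
    ct = torch.randn(B, 1, D, H, W)
    mask = (torch.rand(B, 1, D, H, W) > 0.99).float()
    tokens = MaskAwareFusion3D()(pet, ct, mask)
    print(tokens.shape)  # torch.Size([1, 2, 256])
```

The mask-weighted pooling is the key idea this sketch tries to convey: lesions in whole-body PET/CT are small and dispersed, so plain global pooling would wash them out, whereas conditioning the decoder on per-lesion tokens lets generated findings stay spatially grounded.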
