Prompt to Polyp: Medical Text-Conditioned Image Synthesis with Diffusion Models

The generation of realistic medical images from text descriptions has significant potential to address data-scarcity challenges in healthcare AI while preserving patient privacy. This paper presents a comprehensive study of text-to-image synthesis in the medical domain, comparing two distinct approaches: (1) fine-tuning large pre-trained latent diffusion models (FLUX, Kandinsky) and (2) training compact, domain-specific models. We introduce MSDM, an optimized architecture based on Stable Diffusion that integrates a clinical text encoder, a variational autoencoder, and cross-attention mechanisms to better align medical text prompts with generated images. Evaluation on colonoscopy (MedVQA-GI) and radiology (ROCOv2) datasets shows that while the large models achieve higher fidelity, the optimized MSDM delivers comparable quality at lower computational cost. Quantitative metrics and qualitative evaluations by medical experts reveal the strengths and limitations of each approach.
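The cross-attention conditioning named above is the standard mechanism Stable Diffusion-style models use to inject text information into the denoising network: queries are computed from the image latents, while keys and values come from the text-encoder output. The following is a minimal NumPy sketch of that general mechanism, with toy shapes and randomly initialized projection matrices; it is an illustration of the technique, not the authors' MSDM implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latent_tokens, text_embeddings, W_q, W_k, W_v):
    """Condition image latents on text.

    Queries come from the image latent tokens; keys and values come
    from the text-encoder output, so each latent token attends over
    the prompt tokens.
    """
    Q = latent_tokens @ W_q                    # (n_latent, d)
    K = text_embeddings @ W_k                  # (n_text, d)
    V = text_embeddings @ W_v                  # (n_text, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])    # (n_latent, n_text)
    weights = softmax(scores, axis=-1)         # rows sum to 1
    return weights @ V                         # (n_latent, d)

# Toy example: 16 latent tokens, 8 prompt tokens, model dim 32
rng = np.random.default_rng(0)
d = 32
latents = rng.normal(size=(16, d))   # stand-in for VAE latent tokens
text = rng.normal(size=(8, d))       # stand-in for clinical text embeddings
W_q, W_k, W_v = (rng.normal(size=(d, d)) * d**-0.5 for _ in range(3))
out = cross_attention(latents, text, W_q, W_k, W_v)
print(out.shape)  # (16, 32): one text-conditioned vector per latent token
```

In a real latent diffusion model this block appears inside the U-Net at several resolutions, with learned projections and multiple heads; the sketch keeps a single head and random weights only to show the data flow from prompt to latent.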
@article{chaichuk2025_2505.05573,
  title   = {Prompt to Polyp: Medical Text-Conditioned Image Synthesis with Diffusion Models},
  author  = {Mikhail Chaichuk and Sushant Gautam and Steven Hicks and Elena Tutubalina},
  journal = {arXiv preprint arXiv:2505.05573},
  year    = {2025}
}