MedIL: Implicit Latent Spaces for Generating Heterogeneous Medical Images at Arbitrary Resolutions

12 April 2025
Tyler A. Spears
Shen Zhu
Yinzhu Jin
Aman Shrivastava
P. Thomas Fletcher
Communities: LM&MA, MedIm
Abstract

In this work, we introduce MedIL, a first-of-its-kind autoencoder built for encoding medical images with heterogeneous sizes and resolutions for image generation. Medical images are often large and heterogeneous, and fine details are of vital clinical importance. Image properties change drastically with acquisition equipment, patient demographics, and pathology, making realistic medical image generation challenging. Recent work on latent diffusion models (LDMs) has shown success in generating images resampled to a fixed size. However, fixed-size images cover only a narrow subset of the resolutions native to image acquisition, and resampling discards fine anatomical details. MedIL utilizes implicit neural representations to treat images as continuous signals, so encoding and decoding can be performed at arbitrary resolutions without prior resampling. We quantitatively and qualitatively show how MedIL compresses and preserves clinically relevant features over large multi-site, multi-resolution datasets of both T1w brain MRIs and lung CTs. We further demonstrate how MedIL can influence the quality of images generated with a diffusion model, and discuss how MedIL can enhance generative models to resemble raw clinical acquisitions.

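To make the abstract's central idea concrete, the sketch below (PyTorch) shows the general implicit-neural-representation pattern it describes: a small MLP decodes a compressed latent feature grid at arbitrary continuous coordinates, so the same latent can be rendered at any target resolution without resampling the input. This is a hypothetical illustration, not the authors' MedIL architecture; the class name ImplicitDecoder, the network sizes, and the latent dimensions are all illustrative assumptions.

# Minimal sketch (not the authors' code): latent grid + (x, y, z) query -> intensity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitDecoder(nn.Module):
    """Hypothetical INR-style decoder over a 3D latent feature grid."""
    def __init__(self, latent_dim=16, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, latent, coords):
        # latent: (B, C, D, H, W) compressed feature grid
        # coords: (B, N, 3) query points in [-1, 1]^3; any count or spacing
        grid = coords.view(coords.shape[0], -1, 1, 1, 3)          # (B, N, 1, 1, 3)
        feats = F.grid_sample(latent, grid, align_corners=True)   # (B, C, N, 1, 1)
        feats = feats.view(latent.shape[0], latent.shape[1], -1)  # (B, C, N)
        feats = feats.permute(0, 2, 1)                            # (B, N, C)
        # Condition the MLP on both the sampled feature and the coordinate itself.
        return self.mlp(torch.cat([feats, coords], dim=-1))       # (B, N, 1)

# Decode the same latent at two different resolutions (no prior resampling).
decoder = ImplicitDecoder()
latent = torch.randn(1, 16, 8, 8, 8)
for res in (32, 64):
    axes = [torch.linspace(-1, 1, res)] * 3
    coords = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1).reshape(1, -1, 3)
    out = decoder(latent, coords).reshape(1, 1, res, res, res)
    print(out.shape)

Because the decoder is queried pointwise, the output sampling grid is a free choice at decode time, which is the property the abstract relies on for handling heterogeneous acquisition resolutions.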
View on arXiv
@article{spears2025_2504.09322,
  title={MedIL: Implicit Latent Spaces for Generating Heterogeneous Medical Images at Arbitrary Resolutions},
  author={Tyler Spears and Shen Zhu and Yinzhu Jin and Aman Shrivastava and P. Thomas Fletcher},
  journal={arXiv preprint arXiv:2504.09322},
  year={2025}
}