The Efficacy of Semantics-Preserving Transformations in Self-Supervised Learning for Medical Ultrasound

10 April 2025
Blake VanBerlo
Alexander Wong
Jesse Hoey
Robert Arntfield
Abstract

Data augmentation is a central component of joint embedding self-supervised learning (SSL). Approaches that work for natural images are not always effective in medical imaging tasks. This study systematically investigated the impact of data augmentation and preprocessing strategies in SSL for lung ultrasound. Three data augmentation pipelines were assessed: (1) a baseline pipeline commonly used across imaging domains, (2) a novel semantics-preserving pipeline designed for ultrasound, and (3) a distilled set of the most effective transformations from both pipelines. Pretrained models were evaluated on multiple classification tasks: B-line detection, pleural effusion detection, and COVID-19 classification. Experiments revealed that semantics-preserving data augmentation yielded the greatest performance on COVID-19 classification, a diagnostic task requiring global image context. Cropping-based methods yielded the greatest performance on the B-line and pleural effusion object classification tasks, which require strong local pattern recognition. Lastly, semantics-preserving ultrasound image preprocessing increased downstream performance on multiple tasks. Guidance regarding data augmentation and preprocessing strategies was synthesized for practitioners working with SSL in ultrasound.
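To make the contrast between the pipelines concrete, the sketch below compares a baseline, crop-heavy augmentation pipeline with a milder semantics-preserving one, expressed with torchvision transforms. The specific transformations and parameter values are illustrative assumptions for this sketch, not the authors' exact pipelines, which are detailed in the full paper.

```python
from torchvision import transforms

# Baseline pipeline: crop-heavy, SimCLR-style augmentations commonly used
# across imaging domains. Parameter values are hypothetical.
baseline_pipeline = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
    transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
])

# Semantics-preserving pipeline: mild geometric and intensity changes chosen
# so that clinically meaningful ultrasound structures (e.g., the pleural
# line) remain in view. The transformation set here is an assumption.
semantics_preserving_pipeline = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

def two_views(image, pipeline):
    """Produce the two stochastically augmented views consumed by joint
    embedding SSL methods (e.g., SimCLR, BYOL)."""
    return pipeline(image), pipeline(image)
```

In joint embedding SSL, each training image is passed through the pipeline twice to produce two views that the model is trained to embed similarly, which is why the choice of transformations directly shapes what the learned representation treats as invariant.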

@article{vanberlo2025_2504.07904,
  title={The Efficacy of Semantics-Preserving Transformations in Self-Supervised Learning for Medical Ultrasound},
  author={Blake VanBerlo and Alexander Wong and Jesse Hoey and Robert Arntfield},
  journal={arXiv preprint arXiv:2504.07904},
  year={2025}
}