
Style Content Decomposition-based Data Augmentation for Domain Generalizable Medical Image Segmentation

28 February 2025
Zhiqiang Shen
Peng Cao
Jinzhu Yang
Osmar R. Zaiane
Zhaolin Chen
Abstract

Due to the domain shifts between training and testing medical images, learned segmentation models often experience significant performance degradation during deployment. In this paper, we first decompose an image into its style code and content map and reveal that domain shifts in medical images involve style shifts (i.e., differences in image appearance) and content shifts (i.e., variations in anatomical structures), the latter of which has been largely overlooked. Motivated by this, we propose StyCona, a style-content decomposition-based data augmentation method that augments both image style and content within the rank-one space for domain generalizable medical image segmentation. StyCona is a simple yet effective plug-and-play module that substantially improves model generalization without requiring additional training parameters or modifications to the segmentation model architecture. Experiments on cross-sequence, cross-center, and cross-modality medical image segmentation settings, with increasingly severe domain shifts, demonstrate the effectiveness of StyCona and its superiority over state-of-the-art methods. The code is available at this https URL.
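The abstract gives no implementation details, so as a rough illustration of what augmentation "within the rank-one space" could look like, the sketch below decomposes a 2D image slice with an SVD, mixes singular-value spectra across two training images (a proxy for style), and perturbs the leading rank-one components (a proxy for content). The function name, parameters, and the SVD-based style/content split are assumptions made for illustration only, not the authors' released StyCona code.

```python
import numpy as np

def rank_one_style_content_augment(img_a, img_b,
                                    style_strength=0.5,
                                    content_strength=0.2,
                                    rank_frac=0.1,
                                    rng=None):
    """Hypothetical sketch of a rank-one-space style/content augmentation.

    img_a, img_b: 2D float arrays of the same shape (e.g. normalized slices).
    The split into singular values ("style") and leading rank-one components
    ("content") is an illustrative assumption, not the paper's exact method.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Rank-one decomposition of the source image: img_a = U diag(s) V^T.
    u_a, s_a, vt_a = np.linalg.svd(img_a, full_matrices=False)
    _, s_b, _ = np.linalg.svd(img_b, full_matrices=False)

    # "Style" augmentation: interpolate the singular-value spectra of the
    # two images, altering appearance while keeping img_a's bases.
    s_mix = (1.0 - style_strength) * s_a + style_strength * s_b

    # "Content" augmentation: randomly rescale a leading subset of the
    # rank-one components, mildly perturbing dominant structures.
    k = max(1, int(rank_frac * len(s_mix)))
    s_mix[:k] *= 1.0 + content_strength * rng.standard_normal(k)

    # Reconstruct the augmented image from the modified rank-one terms.
    return u_a @ np.diag(s_mix) @ vt_a
```

In a training loop, such an augmentation would be applied on the fly to input images only (segmentation masks are left untouched), which is consistent with the abstract's claim that the method adds no parameters and requires no changes to the segmentation architecture.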

@article{shen2025_2502.20619,
  title={Style Content Decomposition-based Data Augmentation for Domain Generalizable Medical Image Segmentation},
  author={Zhiqiang Shen and Peng Cao and Jinzhu Yang and Osmar R. Zaiane and Zhaolin Chen},
  journal={arXiv preprint arXiv:2502.20619},
  year={2025}
}