3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images

17 March 2020
Cheryl Sital
T. Brosch
Dominique Tio
A. Raaijmakers
J. Weese
Abstract

Automatic segmentation of anatomical structures with convolutional neural networks (CNNs) constitutes a large portion of research in medical image analysis. The majority of CNN-based methods rely on an abundance of labeled data for proper training. Labeled medical data is often scarce, but unlabeled data is more widely available. This necessitates approaches that go beyond traditional supervised learning and leverage unlabeled data for segmentation tasks. This work investigates the potential of autoencoder-extracted features to improve segmentation with a CNN. Two strategies were considered: first, transfer learning, in which pretrained autoencoder features were used to initialize the convolutional layers of the segmentation network; second, multi-task learning, in which segmentation and feature extraction (by means of input reconstruction) were learned and optimized simultaneously. A convolutional autoencoder was used to extract features from unlabeled data, and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images. For both strategies, experiments were conducted with varying amounts of labeled and unlabeled training data. The proposed learning strategies improved results in 75% of the experiments compared to training from scratch and increased the Dice score by up to 0.040 and 0.024 for ratios of unlabeled to labeled training data of about 32:1 and 12.5:1, respectively. The results indicate that both training strategies are more effective with a large ratio of unlabeled to labeled training data.
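
The two strategies described in the abstract can be summarized in a short sketch. The PyTorch code below is illustrative only and is not the authors' implementation: the layer sizes, the module and function names (Encoder, make_decoder, soft_dice_loss), the single-scale architecture, and the 0.1 loss weight are all simplifying assumptions, whereas the paper uses a multi-scale, fully convolutional segmentation network. It shows (1) transfer learning, where an autoencoder pretrained on unlabeled volumes provides the initial encoder weights, and (2) multi-task learning, where a shared encoder feeds a segmentation head and a reconstruction head whose losses are optimized jointly.

```python
# Illustrative sketch only -- not the paper's actual architecture or training code.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Small 3D convolutional encoder shared by both strategies (assumed layout)."""
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv3d(in_ch, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat * 2, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.layers(x)


def make_decoder(out_ch):
    """Upsampling branch producing reconstruction (out_ch=1) or segmentation logits."""
    return nn.Sequential(
        nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2), nn.ReLU(inplace=True),
        nn.Conv3d(16, out_ch, kernel_size=3, padding=1),
    )


def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss on sigmoid probabilities (one common formulation)."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)


# Strategy 1: transfer learning.
# Pretrain encoder + reconstruction decoder on unlabeled CT volumes (e.g. MSE loss),
# then copy the pretrained encoder weights into the segmentation network.
ae_encoder, ae_decoder = Encoder(), make_decoder(out_ch=1)
# ... pretrain (ae_encoder, ae_decoder) on unlabeled volumes here ...
seg_encoder, seg_head = Encoder(), make_decoder(out_ch=1)   # 1-channel liver logits
seg_encoder.load_state_dict(ae_encoder.state_dict())        # initialize from autoencoder

# Strategy 2: multi-task learning.
# One shared encoder feeds both heads; segmentation and reconstruction losses
# are optimized simultaneously (the 0.1 weight is an arbitrary illustrative choice).
params = list(seg_encoder.parameters()) + list(seg_head.parameters()) + list(ae_decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

x = torch.randn(1, 1, 16, 32, 32)                            # dummy CT patch
y = torch.randint(0, 2, (1, 1, 16, 32, 32)).float()          # dummy liver mask

features = seg_encoder(x)
loss = soft_dice_loss(seg_head(features), y) + 0.1 * nn.functional.mse_loss(ae_decoder(features), x)

opt.zero_grad()
loss.backward()
opt.step()
```

In the paper's setting, the reconstruction objective is what lets the much larger unlabeled set influence the learned features, which is consistent with the reported Dice gains being largest at the high unlabeled-to-labeled ratios (about 32:1 and 12.5:1).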
