Robust deep learning for eye fundus images: Bridging real and synthetic data for enhancing generalization

25 March 2022
Guilherme C. Oliveira
Gustavo de Rosa
Daniel Carlos Guimarães Pedronette
João Paulo Papa
Himeesh Kumar
L. A. Passos
Dinesh Kumar
Topics: OOD, MedIm
arXiv:2203.13856
Abstract

Deep learning applications for assessing medical images are limited because the datasets are often small and imbalanced. The use of synthetic data has been proposed in the literature, but neither a robust comparison of the different methods nor an assessment of their generalizability has been reported. Our approach integrates a retinal image quality assessment model and the StyleGAN2 architecture to enhance Age-related Macular Degeneration (AMD) detection capabilities and improve generalizability. This work compares ten different Generative Adversarial Network (GAN) architectures to generate synthetic eye-fundus images with and without AMD. We combined subsets of three public databases (iChallenge-AMD, ODIR-2019, and RIADD) to form a single training and test set. We employed the STARE dataset for external validation, ensuring a comprehensive assessment of the proposed approach. The results show that StyleGAN2 reached the lowest Fréchet Inception Distance (166.17), and clinicians could not accurately differentiate between real and synthetic images. The ResNet-18 architecture obtained the best performance, with 85% accuracy, and outperformed the two human experts (80% and 75%) in detecting AMD fundus images. The accuracy rates were 82.8% for the test set and 81.3% for the STARE dataset, demonstrating the model's generalizability. The proposed methodology for synthetic medical image generation has been validated for robustness and accuracy, and its code is freely available for further research and development in this field.
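For context on the GAN comparison: the Fréchet Inception Distance fits a Gaussian to Inception-v3 features of real and of generated images and measures the distance between the two distributions; lower is better. A minimal sketch of the metric (not the authors' code; it assumes feature matrices have already been extracted) could look like this:

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """FID between two feature sets of shape (N, D), e.g. D = 2048 Inception features.

    FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})
    """
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the covariance product; numerical noise can
    # introduce tiny imaginary components, which we discard.
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

With this definition, StyleGAN2's score of 166.17 being the lowest among the ten architectures means its synthetic fundus images matched the real-image feature statistics most closely.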
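The paper reports ResNet-18 as the best-performing classifier for AMD detection. A minimal PyTorch sketch of such a setup (the hyperparameters and training-loop details here are illustrative assumptions, not the authors' configuration) might be:

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary fundus classifier: AMD vs. no AMD. ImageNet weights give a
# reasonable starting point for fine-tuning on a small medical dataset.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: AMD / no AMD

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings
criterion = nn.CrossEntropyLoss()

def train_epoch(loader, device: str = "cuda") -> None:
    """One pass over a DataLoader yielding (image, label) batches.

    Real and GAN-generated fundus images can be mixed in `loader`,
    which is how synthetic data would augment a small, imbalanced set.
    """
    model.to(device).train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Under a scheme like this, the synthetic StyleGAN2 images serve purely as training-time augmentation, while held-out real images (the combined test set and STARE) measure generalization.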
