Scale-Aware Contrastive Reverse Distillation for Unsupervised Medical Anomaly Detection

18 March 2025
Chunlei Li
Yilei Shi
Jingliang Hu
Xiao Xiang Zhu
Lichao Mou
    MedIm
Abstract

Unsupervised anomaly detection using deep learning has garnered significant research attention due to its broad applicability, particularly in medical imaging, where labeled anomalous data are scarce. While earlier approaches leverage generative models such as autoencoders and generative adversarial networks (GANs), they often fall short due to overgeneralization. Recent methods explore various strategies, including memory banks, normalizing flows, self-supervised learning, and knowledge distillation, to enhance discrimination. Among these, knowledge distillation, particularly reverse distillation, has shown promise. Following this paradigm, we propose a novel scale-aware contrastive reverse distillation model that addresses two key limitations of existing reverse distillation methods: insufficient feature discriminability and an inability to handle anomaly scale variations. Specifically, we introduce a contrastive student-teacher learning approach that derives more discriminative representations by generating and exploring out-of-normal distributions. Furthermore, we design a scale adaptation mechanism that softly weights the contrastive distillation losses at different scales to account for scale variation. Extensive experiments on benchmark datasets demonstrate state-of-the-art performance, validating the efficacy of the proposed method. Code is available at this https URL.
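The scale adaptation the abstract describes, softly weighting distillation losses across scales, can be sketched in a few lines. This is a minimal illustration rather than the authors' implementation: the function names, the cosine-distance distillation loss, and the use of a softmax over per-scale logits are assumptions based on reverse-distillation practice and the abstract's wording.

```python
import math

def cosine_distance(teacher, student):
    """Mean (1 - cosine similarity) between teacher and student
    feature vectors at corresponding spatial positions.
    teacher, student: lists of equal-length feature vectors."""
    total = 0.0
    for tv, sv in zip(teacher, student):
        dot = sum(a * b for a, b in zip(tv, sv))
        nt = math.sqrt(sum(a * a for a in tv)) + 1e-8
        ns = math.sqrt(sum(a * a for a in sv)) + 1e-8
        total += 1.0 - dot / (nt * ns)
    return total / len(teacher)

def scale_aware_loss(teacher_feats, student_feats, scale_logits):
    """Softly combine per-scale distillation losses.
    scale_logits: one (hypothetically learnable) scalar per scale; a
    softmax turns them into non-negative weights summing to 1."""
    m = max(scale_logits)
    exps = [math.exp(z - m) for z in scale_logits]
    weights = [e / sum(exps) for e in exps]
    losses = [cosine_distance(t, s)
              for t, s in zip(teacher_feats, student_feats)]
    return sum(w * l for w, l in zip(weights, losses)), losses
```

With equal logits the weights reduce to a uniform average over scales; during training, learned logits would let scales at which anomalies are more salient contribute more to the objective.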

@article{li2025_2503.13828,
  title={Scale-Aware Contrastive Reverse Distillation for Unsupervised Medical Anomaly Detection},
  author={Chunlei Li and Yilei Shi and Jingliang Hu and Xiao Xiang Zhu and Lichao Mou},
  journal={arXiv preprint arXiv:2503.13828},
  year={2025}
}