DUDA: Distilled Unsupervised Domain Adaptation for Lightweight Semantic Segmentation

14 April 2025
Beomseok Kang
Niluthpol Chowdhury Mithun
Abhinav Rajvanshi
Han-Pang Chiu
Supun Samarasekera
Abstract

Unsupervised Domain Adaptation (UDA) is essential for enabling semantic segmentation in new domains without requiring costly pixel-wise annotations. State-of-the-art (SOTA) UDA methods primarily use self-training with architecturally identical teacher and student networks, relying on Exponential Moving Average (EMA) updates. However, these approaches suffer substantial performance degradation with lightweight models: their inherent architectural inflexibility leads to low-quality pseudo-labels. To address this, we propose Distilled Unsupervised Domain Adaptation (DUDA), a novel framework that combines EMA-based self-training with knowledge distillation (KD). Our method employs an auxiliary student network to bridge the architectural gap between heavyweight and lightweight models for EMA-based updates, resulting in improved pseudo-label quality. DUDA strategically fuses UDA and KD, incorporating innovative elements such as gradual distillation from large to small networks, an inconsistency loss that prioritizes poorly adapted classes, and learning with multiple teachers. Extensive experiments across four UDA benchmarks demonstrate DUDA's superiority in achieving SOTA performance with lightweight models, often surpassing the performance of heavyweight models from other approaches.
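The two core ingredients the abstract names, EMA-based teacher updates and a distillation term added to the self-training objective, can be sketched as follows. This is a minimal illustration with hypothetical parameter lists and loss values, not the authors' implementation; the function names, the EMA decay `alpha`, and the distillation weight `lam` are assumptions for the sketch.

```python
def ema_update(teacher, student, alpha=0.999):
    """Move each teacher parameter toward the student parameter via an
    exponential moving average: t <- alpha * t + (1 - alpha) * s."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]

def combined_loss(uda_loss, kd_loss, lam=1.0):
    """Total objective: self-training (UDA) loss plus a weighted
    knowledge-distillation term from the heavyweight teacher."""
    return uda_loss + lam * kd_loss

# Toy parameters standing in for network weights.
teacher = [0.0, 1.0]
student = [1.0, 3.0]

# One EMA step nudges the teacher slightly toward the student,
# which is what keeps the pseudo-label source stable during self-training.
teacher = ema_update(teacher, student, alpha=0.9)

total = combined_loss(uda_loss=2.0, kd_loss=0.5, lam=1.0)
```

In practice these updates run over full tensors of network weights every training iteration; the per-class weighting of the inconsistency loss would enter `combined_loss` as an additional term emphasizing poorly adapted classes.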

View on arXiv
@article{kang2025_2504.09814,
  title={DUDA: Distilled Unsupervised Domain Adaptation for Lightweight Semantic Segmentation},
  author={Beomseok Kang and Niluthpol Chowdhury Mithun and Abhinav Rajvanshi and Han-Pang Chiu and Supun Samarasekera},
  journal={arXiv preprint arXiv:2504.09814},
  year={2025}
}