Hard-aware Instance Adaptive Self-training for Unsupervised Cross-domain Semantic Segmentation

14 February 2023
Chuang Zhu, Kebin Liu, Wenqi Tang, Ke Mei, Jiaqi Zou, Tiejun Huang
Abstract

The divergence between labeled training data and unlabeled testing data is a significant challenge for recent deep learning models. Unsupervised domain adaptation (UDA) attempts to solve this problem. Recent works show that self-training is a powerful approach to UDA. However, existing methods have difficulty balancing scalability and performance. In this paper, we propose a hard-aware instance adaptive self-training framework for UDA on the task of semantic segmentation. To effectively improve the quality and diversity of pseudo-labels, we develop a novel pseudo-label generation strategy with an instance adaptive selector. We further enrich the hard-class pseudo-labels with inter-image information through a carefully designed hard-aware pseudo-label augmentation. In addition, we propose region-adaptive regularization to smooth the pseudo-label region and sharpen the non-pseudo-label region. For the non-pseudo-label region, a consistency constraint is also constructed to introduce stronger supervision signals during model optimization. Our method is concise and efficient, and can easily be generalized to other UDA methods. Experiments on GTA5 to Cityscapes, SYNTHIA to Cityscapes, and Cityscapes to Oxford RobotCar demonstrate the superior performance of our approach compared with state-of-the-art methods. Our code is available at this https URL.
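The instance adaptive selector described in the abstract amounts to choosing pseudo-labels with per-image, per-class confidence thresholds rather than a single global cutoff. The sketch below is a minimal illustration of that idea in PyTorch; the function name, the base_prop proportion parameter, and the exact thresholding rule are assumptions made for this example, not the authors' implementation.

import torch

def instance_adaptive_pseudo_labels(probs: torch.Tensor,
                                     base_prop: float = 0.5,
                                     ignore_index: int = 255) -> torch.Tensor:
    """Illustrative per-image, per-class pseudo-label selection.

    probs: softmax output of shape (C, H, W) from the segmentation model.
    base_prop: fraction of the most confident pixels kept per predicted class
               (illustrative hyper-parameter, not the paper's exact value).
    Returns an (H, W) label map; unselected pixels are set to ignore_index.
    """
    conf, pred = probs.max(dim=0)                  # pixel-wise confidence and predicted class
    pseudo = torch.full_like(pred, ignore_index)
    for c in pred.unique():
        mask = pred == c
        c_conf = conf[mask]
        # per-image, per-class threshold: keep the top base_prop most confident pixels
        k = max(1, int(base_prop * c_conf.numel()))
        thresh = torch.topk(c_conf, k).values.min()
        keep = mask & (conf >= thresh)
        pseudo[keep] = c
    return pseudo

Pixels left at the ignore index form the non-pseudo-label region, which is where the paper applies its sharpening regularization and consistency constraint.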

@article{zhu2025_2302.06992,
  title={Hard-aware Instance Adaptive Self-training for Unsupervised Cross-domain Semantic Segmentation},
  author={Chuang Zhu and Kebin Liu and Wenqi Tang and Ke Mei and Jiaqi Zou and Tiejun Huang},
  journal={arXiv preprint arXiv:2302.06992},
  year={2025}
}