
Unsupervised Learning for Class Distribution Mismatch

Abstract

Class distribution mismatch (CDM) refers to the discrepancy between class distributions in training data and target tasks. Previous methods address this by designing classifiers to categorize classes known during training, while grouping unknown or new classes into an "other" category. However, they focus on semi-supervised scenarios and heavily rely on labeled data, limiting their applicability and performance. To address this, we propose Unsupervised Learning for Class Distribution Mismatch (UCDM), which constructs positive-negative pairs from unlabeled data for classifier training. Our approach randomly samples images and uses a diffusion model to add or erase semantic classes, synthesizing diverse training pairs. Additionally, we introduce a confidence-based labeling mechanism that iteratively assigns pseudo-labels to valuable real-world data and incorporates them into the training process. Extensive experiments on three datasets demonstrate UCDM's superiority over previous semi-supervised methods. Specifically, with a 60% mismatch proportion on the Tiny-ImageNet dataset, our approach, without relying on labeled data, surpasses OpenMatch (with 40 labels per class) by 35.1%, 63.7%, and 72.5% in classifying known, unknown, and new classes, respectively.
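The confidence-based labeling mechanism described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: it assumes a generic PyTorch classifier `model`, a hypothetical `select_pseudo_labels` helper, and a confidence threshold chosen for illustration, and it shows only the step where high-confidence predictions on unlabeled images are promoted to pseudo-labels for the next training round.

```python
# Minimal sketch (illustrative, not the paper's implementation) of
# confidence-based pseudo-labeling: keep only unlabeled samples whose
# predicted class probability exceeds a threshold.
import torch
import torch.nn.functional as F

def select_pseudo_labels(model, unlabeled_x, threshold=0.95):
    """Return (indices, pseudo_labels) for samples the model is confident about."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)   # class probabilities
        conf, preds = probs.max(dim=1)                 # top-1 confidence and label
    keep = conf >= threshold                           # confidence filter
    return keep.nonzero(as_tuple=True)[0], preds[keep]

# Toy usage with a dummy classifier and random "images".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
unlabeled_x = torch.randn(64, 3, 32, 32)
idx, pseudo_y = select_pseudo_labels(model, unlabeled_x, threshold=0.5)
print(f"Pseudo-labeled {idx.numel()} of {unlabeled_x.size(0)} unlabeled samples")
```

In the paper's iterative scheme, the selected samples and their pseudo-labels would be folded back into the training set before the next round; the threshold and selection criterion here are placeholders.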

@article{du2025_2505.06948,
  title={Unsupervised Learning for Class Distribution Mismatch},
  author={Pan Du and Wangbo Zhao and Xinai Lu and Nian Liu and Zhikai Li and Chaoyu Gong and Suyun Zhao and Hong Chen and Cuiping Li and Kai Wang and Yang You},
  journal={arXiv preprint arXiv:2505.06948},
  year={2025}
}