Image Classification Using a Diffusion Model as a Pre-Training Model

Abstract

In this paper, we propose a diffusion model that integrates a representation-conditioning mechanism, in which representations extracted by a Vision Transformer (ViT) condition the internal process of a Transformer-based diffusion model. This enables representation-conditioned data generation and reduces the reliance on large-scale labeled datasets by leveraging self-supervised learning on unlabeled data. We evaluate our method on a zero-shot classification task for hematoma detection in brain imaging. Compared to the strong contrastive learning baseline, DINOv2, our method achieves a notable improvement of +6.15% in accuracy and +13.60% in F1-score, demonstrating its effectiveness in image classification.
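The abstract describes conditioning a Transformer-based diffusion model on ViT representations. The paper does not spell out the mechanism here, but one common way to inject a conditioning vector into a diffusion Transformer is adaptive-LayerNorm-style modulation, where the representation predicts per-block scale and shift parameters. The sketch below illustrates that idea in PyTorch; all class and variable names (`RepConditionedBlock`, `cond`, dimensions) are hypothetical and not from the paper.

```python
# Hypothetical sketch: a diffusion-Transformer block modulated by a
# ViT-derived representation via adaptive-LayerNorm-style conditioning.
# This is NOT the paper's implementation, only an illustration of the idea.
import torch
import torch.nn as nn

class RepConditionedBlock(nn.Module):
    """Transformer block whose LayerNorm outputs are scaled/shifted by a
    conditioning vector (a stand-in for a ViT representation)."""
    def __init__(self, dim: int, cond_dim: int, heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Map the conditioning representation to two (scale, shift) pairs.
        self.to_mod = nn.Linear(cond_dim, 4 * dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # cond: (batch, cond_dim) -> four (batch, 1, dim) modulation tensors
        s1, b1, s2, b2 = self.to_mod(cond).unsqueeze(1).chunk(4, dim=-1)
        h = self.norm1(x) * (1 + s1) + b1          # modulated pre-attention norm
        x = x + self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + s2) + b2          # modulated pre-MLP norm
        return x + self.mlp(h)

# Toy usage: 8 noisy latent tokens of dim 64, conditioned on a 32-dim
# ViT feature (e.g. a [CLS] embedding of the clean image).
x = torch.randn(2, 8, 64)
cond = torch.randn(2, 32)
block = RepConditionedBlock(dim=64, cond_dim=32)
eps_pred = block(x, cond)   # would serve as the noise prediction
print(eps_pred.shape)       # torch.Size([2, 8, 64])
```

In a full model, several such blocks would form the denoiser, trained with the usual noise-prediction objective; the ViT encoder supplying `cond` is what allows self-supervised pre-training on unlabeled images.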

@article{ukita2025_2505.06890,
  title={Image Classification Using a Diffusion Model as a Pre-Training Model},
  author={Kosuke Ukita and Ye Xiaolong and Tsuyoshi Okita},
  journal={arXiv preprint arXiv:2505.06890},
  year={2025}
}