Data Distribution Distilled Generative Model for Generalized Zero-Shot Recognition

Abstract

In the realm of Zero-Shot Learning (ZSL), we address biases in Generalized Zero-Shot Learning (GZSL) models, which favor seen data. To counter this, we introduce an end-to-end generative GZSL framework called D³GZSL. This framework treats seen and synthesized unseen data as in-distribution and out-of-distribution data, respectively, for a more balanced model. D³GZSL comprises two core modules: in-distribution dual space distillation (ID²SD) and out-of-distribution batch distillation (O²DBD). ID²SD aligns teacher-student outcomes in embedding and label spaces, enhancing learning coherence. O²DBD introduces low-dimensional out-of-distribution representations per batch sample, capturing shared structures between seen and unseen categories. Our approach demonstrates its effectiveness across established GZSL benchmarks, seamlessly integrating into mainstream generative frameworks. Extensive experiments consistently show that D³GZSL elevates the performance of existing generative GZSL methods, underscoring its potential to refine zero-shot learning practices. The code is available at: https://github.com/PJBQ/D3GZSL.git
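The abstract does not specify the exact loss functions used by ID²SD. As a rough, hypothetical sketch of what aligning teacher-student outcomes in both the embedding and label spaces might look like, the snippet below combines an embedding-space MSE term with a label-space KL-divergence term; the function name, the specific loss terms, and the weighting `alpha` are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dual_space_distillation_loss(t_emb, s_emb, t_logits, s_logits, alpha=0.5):
    """Hypothetical dual-space distillation objective (illustrative only):
    - embedding space: mean squared error between teacher and student embeddings
    - label space: KL divergence from teacher to student class distributions
    `alpha` trades off the two terms; the paper's actual losses may differ.
    """
    emb_loss = np.mean((t_emb - s_emb) ** 2)
    p_t = softmax(t_logits)
    p_s = softmax(s_logits)
    # KL(p_t || p_s), averaged over the batch; epsilon avoids log(0).
    label_loss = np.mean(
        np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    )
    return alpha * emb_loss + (1 - alpha) * label_loss

# When student matches teacher exactly, both terms vanish.
emb = np.random.randn(4, 16)
logits = np.random.randn(4, 10)
zero_loss = dual_space_distillation_loss(emb, emb, logits, logits)

# A perturbed student incurs a positive loss.
pos_loss = dual_space_distillation_loss(emb, emb + 0.1, logits, logits + 0.1)
```

Under this sketch, distilling in both spaces encourages the student to mimic not just the teacher's final class predictions but also its intermediate feature geometry, which is one common rationale for dual-space distillation.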
