Data Distribution Distilled Generative Model for Generalized Zero-Shot Recognition

In the realm of Zero-Shot Learning (ZSL), we address the bias of Generalized Zero-Shot Learning (GZSL) models toward seen data. To counter this, we introduce an end-to-end generative GZSL framework called D³GZSL, which treats seen data as in-distribution and synthesized unseen data as out-of-distribution, yielding a more balanced model. D³GZSL comprises two core modules: in-distribution dual space distillation (IDSD) and out-of-distribution batch distillation (ODBD). IDSD aligns teacher and student outputs in both the embedding and label spaces, enhancing learning coherence. ODBD introduces a low-dimensional out-of-distribution representation for each batch sample, capturing shared structure between seen and unseen categories. Our approach integrates seamlessly into mainstream generative frameworks, and extensive experiments on established GZSL benchmarks consistently show that D³GZSL improves the performance of existing generative GZSL methods, underscoring its potential to refine zero-shot learning practice. The code is available at: https://github.com/PJBQ/D3GZSL.git
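The dual-space alignment that IDSD performs can be sketched as a standard knowledge-distillation objective combining a label-space term with an embedding-space term. This is a minimal illustrative sketch, not the paper's implementation: the function name, the temperature-softened KL divergence for the label space, the MSE for the embedding space, and the `alpha` weighting are all assumptions.

```python
# Hedged sketch of a dual-space distillation loss (assumed form, not the
# paper's exact objective): the student is aligned with the teacher in
# both the label space (temperature-softened KL divergence) and the
# embedding space (mean-squared error).
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dual_space_distillation_loss(student_logits, teacher_logits,
                                 student_emb, teacher_emb,
                                 temperature=2.0, alpha=0.5):
    """Combine label-space KL distillation with embedding-space MSE.

    `temperature` and `alpha` are hypothetical hyperparameters, not
    values taken from the paper.
    """
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    # KL(teacher || student), averaged over the batch; the T^2 factor
    # keeps gradient magnitudes comparable across temperatures.
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    mse = np.mean((student_emb - teacher_emb) ** 2)
    return alpha * kl * temperature ** 2 + (1 - alpha) * mse
```

When student and teacher agree exactly in both spaces, the loss is zero; any divergence in either space increases it, which is the coherence property the abstract attributes to IDSD.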